BrightData committed
Commit fa0fefc
1 Parent(s): 14461ea

Update README.md

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -88,18 +88,18 @@ To explore additional free and premium datasets, visit our website [brightdata.c
 
 The data collection process involved extracting information directly from Goodreads, ensuring comprehensive coverage of the required attributes. Once collected, the data underwent several stages of processing:
 
-- Parsing: Extracted raw data was parsed to convert it into a structured format.
-- Cleaning: The cleaning process involved removing any irrelevant or erroneous entries to enhance data quality.
+- **Parsing**: Extracted raw data was parsed to convert it into a structured format.
+- **Cleaning**: The cleaning process involved removing any irrelevant or erroneous entries to enhance data quality.
 
-### Validation
+### Validation:
 
 To ensure data integrity, a validation process was implemented. Each entry is checked across various attributes, including:
 
-- Uniqueness: Each record was checked to ensure it was unique, eliminating any duplicates.
-- Completeness: The dataset was examined to confirm that all necessary fields were populated or filled, with missing data addressed appropriately.
-- Consistency: Cross-validation checks were conducted to ensure consistency across various attributes, including comparison with historical records.
-- Data Types Verification: Ensured that all data types were correctly assigned and consistent with expected formats.
-- Fill Rates and Duplicate Checks: Conducted comprehensive checks to verify fill rates, ensuring no significant gaps in data, and rigorously screened for duplicates.
+- **Uniqueness**: Each record was checked to ensure it was unique, eliminating any duplicates.
+- **Completeness**: The dataset was examined to confirm that all necessary fields were populated or filled, with missing data addressed appropriately.
+- **Consistency**: Cross-validation checks were conducted to ensure consistency across various attributes, including comparison with historical records.
+- **Data Types Verification**: Ensured that all data types were correctly assigned and consistent with expected formats.
+- **Fill Rates and Duplicate Checks**: Conducted comprehensive checks to verify fill rates, ensuring no significant gaps in data, and rigorously screened for duplicates.
 
 This ensures that the dataset meets the high standards of quality necessary for analysis, research and modeling.
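
The validation checks the README describes (uniqueness, completeness via fill rates, and data-type verification) could be sketched roughly as below. This is a minimal illustration only: the field names (`title`, `author`, `rating`) and the report structure are assumptions for the example, not the dataset's actual schema or BrightData's actual pipeline.

```python
# Hypothetical sketch of the validation pass; field names are illustrative,
# not the dataset's real schema.

def validate(records, required_fields, types):
    """Run uniqueness, completeness, and type checks; return a summary report."""
    report = {"duplicates": 0, "fill_rates": {}, "type_errors": 0}

    # Uniqueness: count exact duplicate records.
    seen = set()
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)

    # Completeness: fill rate per required field (share of non-empty values).
    n = len(records)
    for field in required_fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        report["fill_rates"][field] = filled / n if n else 0.0

    # Data types: non-empty values must match their expected Python type.
    for rec in records:
        for field, expected in types.items():
            value = rec.get(field)
            if value not in (None, "") and not isinstance(value, expected):
                report["type_errors"] += 1

    return report

books = [
    {"title": "Dune", "author": "Frank Herbert", "rating": 4.3},
    {"title": "Dune", "author": "Frank Herbert", "rating": 4.3},  # exact duplicate
    {"title": "Emma", "author": "", "rating": 3.9},               # missing author
]
print(validate(books, ["title", "author"], {"rating": float}))
```

A real pipeline would typically run checks like these per column with a dataframe library and thresholds on acceptable fill rates, but the shape of the logic is the same.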