About the data
These are partial results from The Geography of Human Flourishing Project analysis for the years 2010-2023.
This project is one of the 10 national projects awarded within the Spatial AI-Challenge 2024, an international initiative at the crossroads of geospatial science and artificial intelligence.
At present, only a subset of the data, covering 2010-2012, is available.
The data are provided in both CSV and Parquet formats.
In the datasets, FIPS is the FIPS code of a US state and county is the US county identifier, following the US Census Bureau coding.
The data contain 46 Human Flourishing dimensions plus migration mood and corruption perception.
A reference paper will be uploaded.
How to get the data with Python
from datasets import load_dataset

# load_dataset() returns a DatasetDict; select the "train" split
# before converting to a pandas DataFrame.

# Load the CSV
df_csv = load_dataset("siacus/flourishing", data_files="flourishingStateYear.csv")["train"].to_pandas()

# Load the Parquet
df_parquet = load_dataset("siacus/flourishing", data_files="flourishingStateYear.parquet")["train"].to_pandas()
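Once loaded, the result is a plain pandas DataFrame, so the usual pandas operations apply. A minimal sketch of slicing out a single state-year, using a toy stand-in frame; the column names assumed here ("FIPS", "year", "happiness") are illustrative only and should be checked against df_csv.columns after loading:

```python
import pandas as pd

# Toy stand-in for the loaded DataFrame; the real column names
# should be verified with df_csv.columns after loading.
df_csv = pd.DataFrame({
    "FIPS": ["25", "25", "36"],
    "year": [2010, 2011, 2010],
    "happiness": [0.61, 0.63, 0.58],
})

# Select one state-year slice
subset = df_csv[(df_csv["FIPS"] == "25") & (df_csv["year"] == 2010)]
print(subset["happiness"].tolist())  # [0.61]
```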
How to get the data with R
There is no direct R equivalent of the Python datasets.load_dataset() function for Hugging Face yet, so you can read the files directly from their URLs:
# Load the CSV
library(data.table)
df_csv <- fread("https://huggingface.co/datasets/siacus/flourishing/resolve/main/flourishingStateYear.csv")
# Load the Parquet
library(arrow)
df_parquet <- read_parquet("https://huggingface.co/datasets/siacus/flourishing/resolve/main/flourishingStateYear.parquet")
This dataset also contains two shapefile archives,
cb_2021_us_county_20m.zip
and
cb_2021_us_state_20m.zip
both taken from the US Census Bureau cartographic boundary files; they can be useful for drawing maps in Python.
Unfortunately, the US Census Bureau website now blocks downloads from bots/scripts, so we provide the files here.
To read them from Python, use this code:
import geopandas as gpd

# geopandas can read the zipped shapefiles directly from the repository URLs
states = gpd.read_file("https://huggingface.co/datasets/siacus/flourishing/resolve/main/cb_2021_us_state_20m.zip")
counties = gpd.read_file("https://huggingface.co/datasets/siacus/flourishing/resolve/main/cb_2021_us_county_20m.zip")
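To join the county-level data onto the county shapefile, a common approach is to build the 5-digit GEOID used by the Census Bureau boundary files (2-digit state FIPS + 3-digit county code, both zero-padded) and merge on it. A hedged sketch with plain pandas and made-up values; the "FIPS" and "county" column names and dtypes are assumptions and should be checked against the actual tables:

```python
import pandas as pd

# Toy county-level rows; "FIPS" (state) and "county" column names are assumptions.
df = pd.DataFrame({"FIPS": [25, 25], "county": [17, 25], "happiness": [0.63, 0.60]})

# Census cartographic boundary files carry a 5-digit GEOID:
# 2-digit state FIPS + 3-digit county code, both zero-padded.
df["GEOID"] = (
    df["FIPS"].astype(int).astype(str).str.zfill(2)
    + df["county"].astype(int).astype(str).str.zfill(3)
)
print(df["GEOID"].tolist())  # ['25017', '25025']
```

With the key in place, the merge and a choropleth would then be counties.merge(df, on="GEOID") followed by .plot(column=...) on the result.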