---
license: agpl-3.0
task_categories:
- text-classification
- feature-extraction
- text-generation
sub_categories:
- text-classification
- code-understanding
- semantic-analysis
language:
- en
tags:
- code
- art
- biology
- synthetic
- rust
- ast
- emoji
- code-analysis
pretty_name: rust_ast_emoji
size_categories:
- 100K<n<1M
---
# Rust AST Emoji Dataset
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/meta-introspector/solfunmeme-dioxus)
### Dataset Summary
This dataset contains AST (Abstract Syntax Tree) analyses of a Rust codebase with an emoji mapping for code understanding and visualization. By mapping AST node types and extracted words to emojis, it offers a distinctive view of code structure and enables creative code analysis and visualization.
### Supported Tasks and Leaderboards
- **Code Understanding:** Analyze code structure through emoji patterns
- **Code Classification:** Identify code domains (Crypto, Web, i18n, etc.) through emoji signatures
- **Code Visualization:** Create emoji-based code summaries and visualizations
- **Pattern Recognition:** Discover common coding patterns through emoji frequency analysis
### Languages
The dataset contains Rust source code with English comments and identifiers.
## Dataset Structure
### Data Instances
Each instance contains:
- **file_path:** Path to the original Rust source file
- **timestamp:** Unix timestamp of analysis
- **ast:** Full AST representation in JSON format
- **summary:** Analysis summary including:
- `top_level_nodes`: Number of top-level AST nodes
- `total_nodes`: Total number of AST nodes
- `type_counts`: Count of each AST node type
- `string_literals`: Extracted string literals
- `word_counts`: Word frequency analysis
- `word_emoji_counts`: Emoji mapping for words
- `emoji_counts_in_strings`: Emojis found in string literals
### Data Fields
- `file_path` (string): Path to the original Rust source file
- `timestamp` (int64): Unix timestamp of analysis
- `ast` (string): Full AST representation in JSON
- `summary` (map): Analysis summary with nested fields:
- `top_level_nodes` (int64): Number of top-level AST nodes
- `total_nodes` (int64): Total number of AST nodes
- `type_counts` (map): Count of each AST node type
- `string_literals` (sequence): Extracted string literals
- `word_counts` (map): Word frequency analysis
- `word_emoji_counts` (map): Emoji mapping for words
- `emoji_counts_in_strings` (map): Emojis found in string literals
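For illustration, a single record following this schema might look like the Python dict below. All values are invented for illustration and are not taken from the actual dataset:

```python
# Hypothetical example record illustrating the schema above.
# Every value here is invented; only the field names and types
# follow the schema described in this card.
sample = {
    "file_path": "src/lib.rs",
    "timestamp": 1700000000,
    "ast": '{"items": [{"type": "Fn", "ident": "main"}]}',  # AST serialized as a JSON string
    "summary": {
        "top_level_nodes": 1,
        "total_nodes": 42,
        "type_counts": {"Fn": 3, "Struct": 2},
        "string_literals": ["hello", "🌐"],
        "word_counts": {"main": 1, "hello": 1},
        "word_emoji_counts": {"🔒": 2, "🌐": 1},
        "emoji_counts_in_strings": {"🌐": 1},
    },
}

# Basic checks mirroring the field types listed above.
assert isinstance(sample["timestamp"], int)
assert isinstance(sample["summary"]["type_counts"], dict)
```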
### Data Splits
- **train:** All analyzed Rust files
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by analyzing Rust source files from the solfunmeme-dioxus project, which includes:
- Core application code
- Vendor dependencies
- Generated code
- Test files
#### Who are the source language producers?
The source code was written by developers working on the solfunmeme-dioxus project, including contributions from the open-source community.
### Annotations
#### Annotation process
The annotation process involved:
1. **AST Parsing:** Using the `syn` crate to parse Rust source files into ASTs
2. **Emoji Mapping:** Mapping AST node types and extracted words to emojis based on semantic categories
3. **Analysis:** Extracting string literals, word frequencies, and emoji patterns
4. **Chunking:** Splitting large outputs into manageable chunks (at most 1MB each)
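The emoji-mapping step (2) can be sketched in Python as a lookup over extracted words. The mapping table below is a hypothetical fragment for illustration, not the actual table used by the Rust script:

```python
from collections import Counter

# Hypothetical fragment of the word-to-emoji mapping table;
# the real script defines its own categories and entries.
WORD_EMOJI = {
    "struct": "🧱",
    "fn": "⚙️",
    "crypto": "🔒",
    "locale": "🌐",
}

def map_words_to_emojis(word_counts):
    """Aggregate emoji counts from a word-frequency map.

    Words with no mapping are simply skipped.
    """
    emoji_counts = Counter()
    for word, count in word_counts.items():
        emoji = WORD_EMOJI.get(word.lower())
        if emoji:
            emoji_counts[emoji] += count
    return dict(emoji_counts)

print(map_words_to_emojis({"fn": 3, "struct": 2, "unknown": 5}))
```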
#### Who are the annotators?
The annotations were generated automatically using a custom Rust script that implements emoji mapping based on predefined categories.
### Personal and Sensitive Information
The dataset contains only code analysis data and does not include personal or sensitive information. All file paths are relative to the project structure.
## Additional Information
### Dataset Curators
The dataset was curated as part of the solfunmeme-dioxus project development process.
### Licensing Information
This dataset is licensed under AGPL-3.0, the same license as the source codebase.
### Citation Information
```bibtex
@dataset{rust_ast_emoji,
  title={Rust AST Emoji Dataset},
  author={solfunmeme-dioxus contributors},
  year={2024},
  url={https://github.com/meta-introspector/solfunmeme-dioxus}
}
```
### Contributions
Contributions to improve the dataset, emoji mappings, or analysis methods are welcome through the project's GitHub repository.
## Usage Examples
### Basic Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("h4/solfunmeme-dioxus-reports")
# Access a sample
sample = dataset["train"][0]
print(f"File: {sample['file_path']}")
print(f"Top-level nodes: {sample['summary']['top_level_nodes']}")
print(f"Total nodes: {sample['summary']['total_nodes']}")
```
### Emoji Analysis
```python
# Analyze emoji patterns in a sample
emoji_counts = sample['summary']['word_emoji_counts']
for emoji, count in emoji_counts.items():
    print(f"{emoji}: {count}")
```
### Code Domain Detection
The dataset enables detection of code domains through emoji patterns:
- 🌵 (Agave): Solana/blockchain code
- 🎨 (CSS): Frontend/styling code
- 🔒 (Crypto): Security/cryptography code
- 🌐 (i18n): Internationalization code
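A minimal domain-detection heuristic along these lines might score each domain by the frequency of its signature emojis in a file's emoji counts. The signature sets below are illustrative, not the dataset's authoritative mapping:

```python
# Illustrative emoji signatures per domain; the actual mapping
# used by the dataset may differ.
DOMAIN_SIGNATURES = {
    "blockchain": {"🌵"},
    "frontend": {"🎨"},
    "security": {"🔒", "🔑"},
    "i18n": {"🌐", "🌍"},
}

def detect_domain(emoji_counts):
    """Return the domain whose signature emojis occur most often, or None."""
    best, best_score = None, 0
    for domain, signature in DOMAIN_SIGNATURES.items():
        score = sum(emoji_counts.get(e, 0) for e in signature)
        if score > best_score:
            best, best_score = domain, score
    return best

print(detect_domain({"🔒": 4, "🌐": 1}))  # security
```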
## Technical Details
### Chunking Strategy
The dataset is split into chunks of maximum 1MB each to comply with Hugging Face and GitHub file size limits. Each chunk contains multiple code analysis examples.
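The chunking described above can be sketched as a greedy splitter over JSON-serialized records. This is a simplified illustration; the actual script's logic may differ:

```python
import json

MAX_CHUNK_BYTES = 1_000_000  # roughly 1MB per chunk

def chunk_records(records, max_bytes=MAX_CHUNK_BYTES):
    """Greedily group JSON-serialized records into chunks under max_bytes."""
    chunks, current, current_size = [], [], 0
    for record in records:
        line = json.dumps(record, ensure_ascii=False) + "\n"
        size = len(line.encode("utf-8"))
        # Start a new chunk when adding this record would exceed the limit.
        if current and current_size + size > max_bytes:
            chunks.append(current)
            current, current_size = [], 0
        current.append(record)
        current_size += size
    if current:
        chunks.append(current)
    return chunks

# Toy records just to exercise the splitter.
records = [{"file_path": f"src/file_{i}.rs", "ast": "x" * 100} for i in range(10)]
print(len(chunk_records(records, max_bytes=300)))
```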
### Emoji Mapping Categories
The emoji mapping covers several categories:
- **Rust Core:** Basic Rust language constructs (🦀⚙️, 🏛️🧱, etc.)
- **Web/CSS:** Frontend and styling concepts (📐, 🧭, etc.)
- **Crypto/Security:** Cryptography and security (🔒, 🔑, etc.)
- **Project-Specific:** Domain-specific terms (🌵, 🌞, etc.)
- **Internationalization:** i18n and localization (🌐, 🌍, etc.)
- **Testing/Benchmarking:** Testing and performance (⏱️, 🏋️, etc.)
### Performance Considerations
The dataset is optimized for:
- **Memory efficiency:** Compact JSON serialization
- **Accessibility:** Small chunk sizes for easy loading
- **Scalability:** Organized directory structure for large datasets