About data source

#1
by Chenyu-Zhou - opened

Hello, I came across this amazing dataset by chance, and I would like to know whether there is any documentation related to it and how the "scam" column was generated.

Hi!

I am glad this dataset was of help. It was constructed for a school project. Below you can find a section in which I explain the process of building the dataset and how the "scam" column was generated:

####################################################################################################################################################

One of the most, if not the most, important parts of any deep learning process is the data used. For our project, we first needed to locate a list of projects that turned out to be scams in one way or another. For this part, we retrieved a list of more than 400 scam crypto coins from Comparitech.com (Moody, 2023) (https://www.comparitech.com/crypto/cryptocurrency-scams/).
The data was already well structured and had a column specifying whether the scam was a coin or an NFT; we filtered out the latter.
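A minimal sketch of that filtering step, assuming the list was exported to a CSV with a `type` column (the file name and column names here are assumptions, not the actual ones used):

```python
import pandas as pd

# Load the scam list exported from Comparitech (path and column names are illustrative).
scams = pd.read_csv("comparitech_crypto_scams.csv")

# Keep only coin scams and drop the NFT entries.
coin_scams = scams[scams["type"].str.lower() == "coin"]
print(f"Kept {len(coin_scams)} coin scams out of {len(scams)} entries.")
```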
In order to get the textual data, we used the snscrape web-scraping package. This package queries Twitter search to retrieve as many tweets as desired from a specific time period, given a specific keyword. The keyword we used was the name of the scam project itself, and the time period was the year prior to the date when the project was shown to be a scam. Because of this method we had to do some cleaning and discard some projects, as they were named after common words that returned tweets unrelated to the project (e.g., “Token Coin” or “Merlin”). We also set a maximum of 10,000 tweets per project.
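A rough sketch of what such a scraping loop could look like with snscrape's Python module (the dates, helper name, and tweet cap are illustrative, and note that this approach stopped working once Twitter restricted access):

```python
import snscrape.modules.twitter as sntwitter

MAX_TWEETS = 10_000  # cap per project

def scrape_project(name: str, since: str, until: str) -> list[str]:
    """Collect up to MAX_TWEETS tweets mentioning `name` in the given date window."""
    query = f'"{name}" since:{since} until:{until}'
    tweets = []
    for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
        if i >= MAX_TWEETS:
            break
        # Older snscrape versions expose the text as `tweet.content`,
        # newer ones as `tweet.rawContent`; handle both.
        tweets.append(getattr(tweet, "rawContent", getattr(tweet, "content", "")))
    return tweets

# Example (dates are placeholders): one year prior to the date the project was shown to be a scam.
project_tweets = scrape_project("SomeProjectName", since="2021-01-01", until="2022-01-01")
```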
Once we retrieved the scam tweets, we went on to scrape the “legit” coins, using the same method described above, this time selecting the top 100 cryptocurrencies. We faced the same naming issue and had to drop projects such as “Celsius”.
After presenting our project to the class, we received feedback on this step, pointing out that choosing the top 100 was perhaps not the best approach and that our model could end up learning to classify top-100 vs. non-top-100 coins instead of scam vs. not scam. We found this feedback very useful, so we took the top 1,000 coins, randomly sampled 100 of them, and tried to rerun the web-scraping script. Unfortunately, Twitter’s new leadership implemented limits on free API access, making scraping tweets at scale extremely expensive/slow, so we had to make do with the data we had collected before.
Even though it was not possible to implement that part of the feedback, we did our best to treat the data at hand and make it useful and relevant to our purpose. Firstly, we deleted the name of each project from the tweets retrieved for it, so that the model would not learn to differentiate by the names. Since we only had the project names and not the tickers, we also substituted ticker mentions with the string “[ticker]” so that the model does not learn to differentiate using those. The goal is for the model to work only from the text/characters to decide whether a project is a scam, and not to distinguish between individual projects but only between scam and not scam.
E.g.:
- “Look at Bitcoin, it has gone up 10% during the past 48 hours $BTC”
Would then turn into
- “Look at , it has gone up 10% during the past 48 hours [ticker]”
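A minimal sketch of that substitution, assuming the cleaning amounts to a case-insensitive deletion of the project name plus replacing anything that looks like a cashtag (the actual code may have handled edge cases differently):

```python
import re

def mask_project(text: str, project_name: str) -> str:
    """Delete the project name and replace cashtag-style tickers with '[ticker]'."""
    # Drop the project name so the model cannot key on it.
    text = re.sub(re.escape(project_name), "", text, flags=re.IGNORECASE)
    # Since ticker symbols were not available, replace anything that looks
    # like a cashtag (e.g. "$BTC") with a generic placeholder.
    return re.sub(r"\$[A-Za-z]{2,10}\b", "[ticker]", text)

print(mask_project("Look at Bitcoin, it has gone up 10% during the past 48 hours $BTC", "Bitcoin"))
# -> "Look at , it has gone up 10% during the past 48 hours [ticker]"
```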

Following the editing above, we also decided to keep emojis, URLs, and mentions as they are, since we believe they can carry meaning by themselves. This covers all the cleaning and preprocessing of the data; after this, we had to decide which part of the data we would use and how we would use it.

After some trial and error, we decided to drop all tweets shorter than 3 words. The reasoning behind this is that we consider them mostly noise: they contain rather little information and may even be detrimental to training times. Furthermore, we had an overarching concern about whether a single tweet would be enough to accurately flag a fraudulent project, so we ended up packing the tweets into batches of 3 from the same project. Therefore, when using our scam detection model, users should input at least 3 tweets, and preferably a multiple of that; the model also works with non-multiples of 3, though the accuracy might not be as good.
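A sketch of this filtering and batching step (whether the batches were concatenated into a single string or kept as separate fields is an assumption here):

```python
def batch_tweets(tweets: list[str], batch_size: int = 3, min_words: int = 3) -> list[str]:
    """Drop very short tweets and group the rest into batches from the same project."""
    # Discard tweets with fewer than `min_words` words; they carry little signal.
    kept = [t for t in tweets if len(t.split()) >= min_words]
    # Concatenate consecutive tweets into groups of `batch_size`.
    return [" ".join(kept[i:i + batch_size]) for i in range(0, len(kept), batch_size)]
```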

Once we have cleaned and properly organized the data, we continue by tokenizing and encoding it. For the large general models (BERT and RoBERTa) we tokenize later on, and we discuss that in the part of the report covering those models. The tokenization done at this point is relevant to our CNN model and the bag-of-words approach. We use two types of tokenization, character and word. For the former, we take every single character (including emojis) in a tweet and create a vocabulary dictionary that looks as follows:

  • {‘G’:1, ‘@’:2, etc…}
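A small sketch of how such a character vocabulary could be built (reserving index 0 for padding/unknown characters is an assumption; the original indexing may differ):

```python
def build_char_vocab(tweets: list[str]) -> dict[str, int]:
    """Map every character appearing in the tweets (including emojis) to an integer id."""
    chars = sorted(set("".join(tweets)))
    # Index 0 is reserved for padding/unknown characters (an assumption).
    return {ch: i + 1 for i, ch in enumerate(chars)}

vocab = build_char_vocab(["Look at , it has gone up 10% [ticker]"])
encoded = [vocab[ch] for ch in "Look at"]  # character-level encoding of a tweet
```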

For the latter method of tokenization, we use the TweetTokenizer() class from the NLTK package, which is optimized for tokenizing Twitter data. We then put all tokens into a list and convert it into a set to remove duplicates, after which we create a vocabulary dictionary with the same structure as described above. Finally, we add two columns to the final data: one for the data encoded using character tokenization, and another for the data encoded using word tokenization.
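A sketch of the word-level variant using NLTK's TweetTokenizer (again, reserving 0 for padding/unknown tokens is an assumption):

```python
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()

def build_word_vocab(tweets: list[str]) -> dict[str, int]:
    """Build a word-level vocabulary from tweet-tokenized text, keeping emojis, mentions, and URLs as tokens."""
    tokens = []
    for tweet in tweets:
        tokens.extend(tokenizer.tokenize(tweet))
    unique_tokens = sorted(set(tokens))  # deduplicate via a set
    # Index 0 is reserved for padding/unknown tokens (an assumption).
    return {tok: i + 1 for i, tok in enumerate(unique_tokens)}

def encode(tweet: str, vocab: dict[str, int]) -> list[int]:
    """Encode a tweet as a sequence of vocabulary ids (unknown tokens map to 0)."""
    return [vocab.get(tok, 0) for tok in tokenizer.tokenize(tweet)]
```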

####################################################################################################################################################

Let me know if you have any further questions, I will be glad to help.

Thank you so much for your incredibly detailed description! I am truly grateful. May I kindly inquire if there is any reference available for your outstanding work, including datasets or models? If I have the opportunity to apply your data in my future research, I would be absolutely thrilled to give proper credit to your amazing work.

That's truly flattering! You may link this repository if you want, so that others may benefit from the data. Let me know if you do end up using this data for your research; I would love to read it.
