id | project_link | project_description
---|---|---
9,987 | https://devpost.com/software/gocomputeme-gqdk37 | Inspiration
We were talking about how to simulate genetic sequencing and eventually arrived at the idea of crowdsourcing computing power so that scientific research can run intensive simulations and projects.
What it does
This serves a WebAssembly program to people willing to help with research, boosting the speed of that research.
How I built it
We combined code written in C++, JavaScript, and Python. They all work in harmony.
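As a sketch of how a browser (or Node) can load and run a compiled WebAssembly module: the bytes below are a minimal hand-assembled module exporting an `add` function, not GoComputeMe's actual research kernel, which would be compiled from C++ (for example with Emscripten) and fetched from the server.

```javascript
// Minimal hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// A real research kernel would be compiled from C++ and fetched over HTTP
// instead of embedded as raw bytes.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
]);

// Compile and instantiate synchronously (fine for tiny modules;
// use WebAssembly.instantiateStreaming(fetch(...)) for real ones).
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
console.log(instance.exports.add(2, 3)); // 5
```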
Challenges I ran into
Compiling WebAssembly is still a poorly documented process, so building WebAssembly apps was rather difficult.
Accomplishments that I'm proud of
It works, and we got it working in just four hours.
What I learned
We effectively learned WebAssembly.
What's next for GoComputeMe
We will flesh it out, since the current setup is very simple. We will most likely add aesthetic improvements, computational improvements, and more testing.
Built With
flask
react
webassembly
Try it out
github.com |
9,987 | https://devpost.com/software/getdone-pvz5yw | Preview of the App
Landing Web page
About Page
Inspiration
A productivity app to help one focus during these hard times. As a team, we aim to be more productive and focused on our long-term goals; that's why we made this simple application to help guide us down the right path.
What it does
Simply enter a task you want to achieve, followed by the duration of that task. When the duration is over, you will be asked to do a "fun task", such as hugging your mom or making someone smile; then you return to work and repeat the entire process. All the data (tasks, times, etc.) is kept in local storage for now.
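A minimal sketch of how tasks might be kept in local storage (the key name and task shape are assumptions, not getDone's actual code; the functions take the storage object as a parameter so the same logic works with the browser's `localStorage`):

```javascript
// Save and load tasks as JSON under a single storage key.
// `storage` is anything with getItem/setItem, e.g. window.localStorage.
const TASKS_KEY = 'getdone.tasks'; // hypothetical key name

function loadTasks(storage) {
  const raw = storage.getItem(TASKS_KEY);
  return raw ? JSON.parse(raw) : [];
}

function addTask(storage, name, minutes) {
  const tasks = loadTasks(storage);
  tasks.push({ name, minutes, done: false });
  storage.setItem(TASKS_KEY, JSON.stringify(tasks));
  return tasks;
}

// In the browser: addTask(window.localStorage, 'Write report', 25);
```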
How I built it
Since we only had 24 hours for this, we built the project using HTML, CSS, and vanilla JavaScript.
Challenges I ran into
Making a leveling system to reward productive users; in the end we had to change plans and stick to the base-level implementation.
Accomplishments that I'm proud of
Working under pressure, making the web app from scratch, and learning new javascript skills.
What I learned
How to code smarter in js, and work under pressure with members across the world.
What's next for getDone
Implement a DB, build a leveling-up system to reward productive users, and let users challenge each other to determine who is more productive.
Built With
css3
html5
javascript
Try it out
github.com
codingxllama.github.io |
9,987 | https://devpost.com/software/pet-content-to-make-people-happy | Inspiration: With the coronavirus, so many people are bored at home. I wanted to make something that makes it easier for people to access the content they need (especially pet content).
What it does: It accesses the Unsplash API for images, Google Web Search for links, and YouTube for videos. You can search up content for cats, dogs, rabbits, or anything else.
How I built it: I used React and Redux
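As a sketch, a search request to the Unsplash API can be built like this (`https://api.unsplash.com/search/photos` is Unsplash's documented search endpoint; the access key and the React/Redux wiring are omitted, and the usage shown is an assumption about how the app calls it):

```javascript
// Build the Unsplash photo-search URL for a query like "cats".
function buildUnsplashSearchUrl(query, perPage = 5) {
  const params = new URLSearchParams({ query, per_page: String(perPage) });
  return `https://api.unsplash.com/search/photos?${params}`;
}

// Usage (in the app, YOUR_ACCESS_KEY would be a real Unsplash access key):
// fetch(buildUnsplashSearchUrl('cats'), {
//   headers: { Authorization: 'Client-ID YOUR_ACCESS_KEY' },
// }).then(r => r.json()).then(data => console.log(data.results));
```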
Challenges I ran into: I started learning these technologies a week ago, so I was still confused by the syntax. However, I learned a lot more about them along the way.
Accomplishments that I'm proud of: I learned how to use multiple APIs
What I learned: This was my first hackathon, and I started learning React and Redux less than a week ago.
What's next for Pet Content to make people happy: A description at the top, and the ability to load more than 5 results from each API.
Built With
react
redux
search
semantic
unsplash
youtube
Try it out
codepen.io |
9,987 | https://devpost.com/software/my-portfolio-1r9ayp | Inspiration
At the beginning of this hackathon I had no idea what to build, so I participated in the "Build and Deploy your first Website" workshop, and the idea of building my portfolio came to mind.
What it does
My portfolio contains a lot of information about me since I started university.
It includes a short "About me" with links to my LinkedIn, GitHub, and Devpost profiles, my study tracks, my programming and other technical skills, conferences, events, and certifications I've attended (not fully updated), projects I've completed, and finally where I live and how to contact me.
How I built it
I built my portfolio using HTML5, CSS3, JavaScript and Bootstrap framework.
Challenges I ran into
The most difficult parts were positioning components, spacing between components, visibility of the content, and the limited time.
Accomplishments that I'm proud of
Now that I have a portfolio, built during the MLH "Hack At Home" hackathon, I'm so proud of myself.
What I learned
What's next for My Portfolio
I will update my portfolio monthly, and maybe change some of the design and add some information.
Built With
bootstrap
css3
glitch
html5
javascript
Try it out
habbou-ayoub-portolio.glitch.me |
9,987 | https://devpost.com/software/octocat-game | Home Page
Here is the game, let's play
Welcome Page & Rules to play the game
Octocat-Game
A game about Octocats, built during the Hack At Home hackathon!
The Octocat game is built with HTML, CSS, and JavaScript.
Challenges came up on the front end, but we faced them thanks to the awesome teamwork that built the Octocat game.
My team members are already full-stack developers, but what the team especially learned was teamwork.
Built With
bootstrap
css3
html5
javascript
jquery
mern
Try it out
github.com |
9,987 | https://devpost.com/software/karo-na-f54ity | Stats of India
Home Screen
Precautions
Helpline
Corona News
Considering all that has been going on with the COVID-19 pandemic, we've tried to come up with a solution to make us more comprehensive and clear about what's actually going on in our world. COVID-19 is more widespread than severe acute respiratory syndrome, more infectious than seasonal influenza, and has already killed more people than Ebola. This is a virus with huge lethal potential.
But where there is awareness and information, there are precautions and betterment. People can only act if they actually know about things. If we talk about India, we have been doing better than a lot of countries so far, but what the future brings, no one knows. Our lives have been a complete mess since this virus entered our home, India. But what is more troubling, or what can cause more trouble, is the fact that we will be helpless if we don't know the facts and what to do. To make things easier, we built this app to make people aware of everything that is even slightly related to this dangerous virus.
Awareness is the key, they say. Awareness is the key, we believe. Measures must be taken to fight this virus, and of course sharing them with your loved ones should also be a thing. And counts and stats matter: responses rely on the counts of deaths attributed to this virus to monitor the impact of the disease and to guide what needs to be done.
The media also plays an important role in our community, gathering the necessary information and facts and briefing society, but we all know there has been a lot of rumor-spreading and little transparency. People don't really know what to believe and what not to.
And if something does go wrong, a helpline is something that we don't just want, but NEED. Keeping these things in focus, along with many other important ones, we have tried to come up with an app for everyone: from safety measures to counts, from precautions to helplines, from news updates to stat updates, everything at one stop. It aims to be fast and accurate.
Built With
android
api
kotlin
retrofit
room
xml
Try it out
1drv.ms |
9,987 | https://devpost.com/software/hack-at-home-2020 | Hack-At-Home-2020
This is another hackathon repo for our app Pollen
How to create a new expo react native app
https://expo.io/learn
Trying to run
>npm install expo-cli --global
Getting an error... errno: -13, EACCES (permission denied)
running
>npm install expo-cli
let's see...
okay that worked
Now running:
>expo init pollen
choosing blank (TypeScript)
Chose to install all dependencies through yarn
Run project
cd pollen
yarn start
got an sdkVersion error... didn't work
expo start
didn't work
expo-cli start --tunnel
sdkVersion is missing... guess that's it
Going to make sdkVersion and expo version the same
Installing sdkVersion
go to app.json and add sdkVersion like I have in my app.json
delete your node_modules folder
run
>yarn install
expo-cli start --tunnel
Now you can scan the qr code on your iphone and can start working on App.tsx!
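As a sketch, the sdkVersion addition to app.json might look like this (the version number "37.0.0" is an assumption; it must match the Expo SDK version actually installed in the project):

```json
{
  "expo": {
    "name": "pollen",
    "slug": "pollen",
    "sdkVersion": "37.0.0"
  }
}
```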
Editing the App
start by editing the App.tsx file
a. you can add more text components or make your own components!
https://reactnative.dev/blog/2018/05/07/using-typescript-with-react-native
Adding Google_maps_react component
yarn add google-maps-react
a. now use GCP to get google maps api key
b. create a .env file and add your api key (don't forget to use .gitignore and include .env there)
Create a MapView component where you will handle all the map api logic
this is a TSX file so it may be annoying...
Create a GCP account and enable google maps javascript api with your api key
Adding and handling API Keys
Create a .env file
Built With
javascript
react-native
typescript
Try it out
github.com |
9,987 | https://devpost.com/software/bytesofproblems | Home page
Problem 1 Statement
Problem 1, Syntax Error
Problem 1, Generating diagram from solution
Inspiration
While working on my computer architecture course with a couple of teammates, we discussed why no one had made a Verilog equivalent of LeetCode where aspiring computer engineers can practice implementing solutions at the RTL level. The closest is EDA Playground, which provides an example-based format demonstrating others' work, and there weren't many examples listed.
What it does
This website allows students to design, test, and debug Verilog modules to meet problem specifications. Only one official example is listed on the website as of now, but the modular design, built by expanding off the YosysJS examples while providing a cleaner text-editor UI, makes adding more straightforward. The user can access the webpage through any browser, including mobile phones.
How I built it
I developed the web application with HTML, CSS, and JavaScript. Almost everything is done client-side. YosysJS, a JavaScript port of Yosys, allows Bytes Of Problems to avoid relying on servers to parse the user's Verilog modules. Quill is another library I rely on to provide a nice but easy-to-manage text editor. To locally host the website for quick prototyping I used Python. Additionally, I used Git for version control and Visual Studio Code as my IDE.
Challenges I ran into
Doing this project on my own, I was worried I had bitten off more than I could chew, especially since I wasn't adept in web development. Even though my project allows for a light backend (no server logic required other than fetching webpages), I was picking up libraries as I went and refreshing my Verilog. I didn't prioritize my time well, and a lot of additional features I wanted to tack on ended up preventing me from focusing on the things I needed for a minimally viable product demo. I only have three examples at the moment, and the website is live (problems 1-3).
Accomplishments that I'm proud of
I'm proud of picking up multiple libraries during this hackathon and staying committed to this project. Around midnight May 2nd I was thinking my project couldn't be feasibly implemented and was thinking about giving up, but I thought about how much I procrastinate on work that I don't need to do. With deadlines left and right, there'll never be a better time for me to work on a project I'm interested in than at a hackathon. So I kept working on it. It's not perfect and it hasn't reached the desired end-goal functionality, but I made a prototype of the thing I thought would be too hard to make in a weekend.
What I learned
I learned that I have underestimated basic front-end web development. I learned a lot of Javascript through this experience.
What's next for BytesOfProblems
I have several ideas for improving the product. For the presentation I focused on the limited scope of developing a single module that you would test, but both Quill and YosysJS are compatible with multiple files and Verilog modules; I'm hoping to develop problems that force the user to apply the modules provided and see if they can build bigger modules around them.
Quill is also compatible with syntax highlighting and that would make interacting with the text editor much easier, especially when experiencing errors.
I'm also thinking about allowing better syntax suggestions than what's possible with YosysJS, and I may rely on some backend service to take the user's input and determine whether it's syntactically correct via Icarus Verilog. In addition, this can prevent the trivialization of some problems, as I could easily restrict certain solutions (not allowed to use the + operator for implementing the adder, etc.). This was a fun project and I hope it continues to grow.
Lastly, I could definitely neaten up this project and apply web development practices better, such as better formatting and isolating HTML, CSS, and JS into respective files as much as possible instead of monolithic web pages.
Built With
css
html
javascript
quill
wavedrom
yosys
yosysjs
Try it out
saleh99er.github.io |
9,987 | https://devpost.com/software/demo-pyg2rv | Welcome
HomePage
Global data
Dropdown to select a country
visualization
Responsive Visualisation
Graphical view about affected people
Graphical view about deaths
Inspiration
Nowadays, during this pandemic, there has been an extreme lack of resources for education, fun, and COVID-19 information, so we wanted an app that would let users see where COVID cases are most prevalent, get themselves well educated, and make their home quarantine joyful.
What it does
Users can get details about the number of COVID-19 cases: infected people, recoveries, and deaths caused by COVID-19. Users get a beautiful visualization of the totals for all of these.
How we built it
API used:
https://covid19.mathdro.id/api
We built it with various technologies, such as React, Chart.js, Material-UI, and more.
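A sketch of pulling the headline numbers out of that API's response before charting them (the response shape, with `confirmed`, `recovered`, and `deaths` each wrapping a `value`, is what the mathdro.id API returned at the time; treat it as an assumption):

```javascript
// Reduce the https://covid19.mathdro.id/api response to plain counts
// that can be fed into a Chart.js dataset.
function extractCounts(data) {
  return {
    confirmed: data.confirmed.value,
    recovered: data.recovered.value,
    deaths: data.deaths.value,
  };
}

// Usage in the app (network call, sketched):
// fetch('https://covid19.mathdro.id/api')
//   .then(r => r.json())
//   .then(data => console.log(extractCounts(data)));
```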
Challenges we ran into
Being complete beginners with various technologies, we faced a lot of problems while figuring out each task. All of our team members are in different time zones, so it was difficult to work together. The session on GitHub helped us a lot, though.
Apart from that, getting the API to work, solving merge conflicts, and understanding props and hooks in React were difficult, as we did not have any prior experience working with React.
Accomplishments that we’re proud of
We tried our best to merge three things in one place (education, fun, and updates about COVID-19), and it didn't quite work out. Even so, we are proud of our efforts to make things happen. The website seems useful and is ready to launch. One of the main things: almost all of us are at the beginner level.
What we learned
We learned how to structure complex React web applications, understood some use cases of Material-UI, and built a multi-page web app using React Router (thanks to the mentors' help).
Always build a rough draft first, and submit your project before the deadline.
What's next for Demo
Getting real users to use this website and share it, getting more countries onto the live covid19 section, getting real education and fun on board.
Built With
api
chartjs
covidapi
css3
html5
javascript
materialui
react
Try it out
covid19combinato.netlify.app |
9,987 | https://devpost.com/software/twitter-bot-tic-tac-toe | Inspiration
We wanted to take this opportunity to enhance our coding skills while making something fun in the process. Doing something new always attracts us. This was the first time we played with many programming concepts, such as APIs.
What it does
When you tweet @tictoctaebotfun #haha, a bot will automatically respond. Our goal was to include a link to a tic-tac-toe webpage that we would develop ourselves. For now the parts are separate, and the bot only gives back a message, which will still have to be incorporated with the website. Another team member worked on the database that keeps a history of the games played by the user, using MongoDB. We also used AI to make the bot a worthy opponent in tic-tac-toe.
We set up a MongoDB Atlas cluster to hold user data, including the current game and how many games the user has won. Using PyMongo, we were able to update and read from this database through a Flask web app. The takeaway from this component was achieving communication between the app and the database. The next step is to integrate the DB into the app, utilizing real user information.
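A sketch of the kind of AI that makes the bot a worthy opponent: plain minimax over the tic-tac-toe board. This is a generic implementation, not the team's actual code; the board is an array of nine cells holding 'X', 'O', or null.

```javascript
// Minimax tic-tac-toe: 'X' maximizes, 'O' minimizes.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(board) {
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) return board[a];
  }
  return null;
}

// Returns { score, move } from X's perspective (+1 X wins, -1 O wins, 0 draw).
function minimax(board, player) {
  const w = winner(board);
  if (w === 'X') return { score: 1 };
  if (w === 'O') return { score: -1 };
  if (board.every(cell => cell)) return { score: 0 };

  let best = null;
  for (let i = 0; i < 9; i++) {
    if (board[i]) continue;
    board[i] = player;                                    // try the move
    const { score } = minimax(board, player === 'X' ? 'O' : 'X');
    board[i] = null;                                      // undo it
    if (!best || (player === 'X' ? score > best.score : score < best.score)) {
      best = { score, move: i };
    }
  }
  return best;
}
```

With perfect play on both sides tic-tac-toe is a draw, so `minimax(Array(9).fill(null), 'X').score` comes out 0.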
Challenges I ran into
Because it was our first time using APIs, we ran into challenges figuring out how to interact with them, as well as building AI that makes the bot respond appropriately to the player's moves. In addition, we still have the challenge of putting it all together, which we will attempt to do even after the hackathon. After working many hours on this project, we ran into challenges submitting it, as our brains were slowly diminishing.
Accomplishments that I'm proud of
Even though the parts are separate, we are really proud of what we accomplished. A day ago, if you had asked us to use a cloud database, work with AI, or even APIs, we would not have known how to respond. However, now we are more confident in our coding skills. With our teamwork, we were able to put together something we would show our parents.
What I learned
We learned about the twitter API, cloud services, databases, AI, python, javascript, flask, web development, etc.
What's next for Twitter-bot-tic-tac-toe
We are going to try to bring all of our elements together after the hackathon, since we did not do it in time.
Built With
api
javascript
python
twitter
Try it out
github.com |
9,987 | https://devpost.com/software/accountwin | Homepage
Login page
Looking for an accountability buddy
The joy of finding an accountability buddy
Another live demo:
https://youtu.be/yqkOTb953ps
Inspiration
COVID-19 has held us hostage in our own homes. However, it has also been an opportunity to reflect upon ourselves: finally master that skill we've always wanted to, read that book we've been ignoring on our bookshelf, learn that foreign language or coding language. Staying productive isn't as easy as it sounds, but sometimes all you need is that extra motivation. AccounTwin presents you with the opportunity to grab it!
Our team believes that that extra motivation can come from a partner with shared goals. This partner will not only make it fun and competitive but will also introduce accountability, boosting productivity while striving to complete the goals together.
What it does
It keeps us Happy at Home! ;)
Similar to the dating app Tinder, users are mutually matched based on a check/cross mechanism. Other users are displayed anonymously based on their goals. Once matched, the users can then communicate with each other and disclose the essential information to hold each other accountable. The incentive mechanism makes sure a pair successfully completing their goal is rewarded.
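The check/cross mechanic above boils down to a mutual-match test, sketched here generically (the data shapes are assumptions for illustration, not AccounTwin's actual schema):

```javascript
// A "check" is recorded as the string "from->to". Two users match
// only once each has checked the other.
function check(checks, from, to) {
  checks.add(`${from}->${to}`);
  return checks.has(`${to}->${from}`); // true => it's a mutual match
}

// Usage:
const checks = new Set();
check(checks, 'alice', 'bob');                 // false: bob hasn't checked back yet
const matched = check(checks, 'bob', 'alice'); // true: mutual match, unlock chat
```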
How we built it
We used the MERN stack for the web app. Two of our team members worked on the front end, while the other two worked on the back end. We started from scratch; most of the front-end team's work comprised React and graphic design, while MongoDB, GraphQL, and integrations were the tasks of the back-end team.
Challenges we ran into
Not everyone was very familiar with the technology that they were working with, but we pushed our limits and learned in the process! One challenge was connecting MongoDB to our first user data. Since most of the team was relatively new to the MERN stack, routing was both a challenging and pleasant surprise. Moreover, merge conflicts kept arising, which united both sides of the team.
Accomplishments that we're proud of
We are proud of our idea and believe in its potential to improve productivity. Moreover, every member of the team had something new to learn. Regardless of the challenges we ran into, the team communicated effectively and surpassed those challenges.
What we learned
From learning a new development tool from scratch to running into new issues while using familiar tools, every member of the team has experienced it all with React, GraphQL, MongoDB, and routing. For half of the team, it was their first experience working remotely.
What's next for AccounTwin
The team will continue developing AccounTwin and implement or improve on features such as a point-based incentive system and leaderboards.
Built With
ant-design
express.js
mongodb
node.js
react
Try it out
github.com |
9,987 | https://devpost.com/software/zunelectures-luvzwh | Upload your video with your login ID
Interface for transcription and tools for annotations.
Here, you will have the video playing by the side, while you can make annotations on the video transcription.
Registered domain name from domain.com:
https://www.zunelectures.online/
Inspiration
With university moving online recently, we all had to get used to online lectures. It is often difficult to hear or follow a lecture due to the online format. Having a transcript on hand helps us follow along with the lecture and enables us to write better notes.
What it does
ZuneLectures transcribes video lectures into text on an interface and allows you to add your own notes and annotations to that transcription while the video is playing, so that you can take better notes on what the video is saying.
The video can be played alongside the notes, and you have tools to use to make highlights and side notes on the transcription of the video, in order to help the user learn more from the video they are watching.
How we built it
We created a React app that uses a Node.js backend, stores the video in a MongoDB database, and runs the Google Cloud Speech-to-Text library on that video to receive a transcription. Then we used React to add the tools for users to make side notes and annotations on the transcription UI.
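A sketch of the audio-extraction step that precedes speech-to-text, assuming `ffmpeg` is available on the server. Mono 16 kHz WAV is a common input format for speech recognition; the exact settings ZuneLectures used are not stated, so treat these flags as an illustrative assumption.

```javascript
// Build ffmpeg arguments that strip the video stream (-vn) and
// downmix to mono 16 kHz WAV, a typical speech-to-text input format.
function buildAudioExtractArgs(videoPath, audioPath) {
  return ['-i', videoPath, '-vn', '-ac', '1', '-ar', '16000', audioPath];
}

// Usage on the Node backend (sketch, requires ffmpeg installed):
// const { execFileSync } = require('child_process');
// execFileSync('ffmpeg', buildAudioExtractArgs('lecture.mp4', 'lecture.wav'));
```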
Challenges we ran into
Figuring out how to process the audio separately from a video file was a challenge for us initially. Once we figured out how to take the audio from the video file, we were able to use GCP on the audio to transcribe it into a text document. We also ran into some challenges with the backend and ensuring that our app and the Google Cloud Platform were linked. Additionally, we ran into issues with connecting to our MongoDB Atlas service, but we tried switching the provider from GCP to Azure and it seemed to fix the issue.
Accomplishments that we're proud of
We are proud that we managed to overcome sleep and the challenges of working remotely to get a final product running. As well, we are proud we managed to incorporate many technologies in our stack, and that we were able to successfully host on a domain.
What we learned
We learned how to use React over the weekend and how to use the Google Cloud Platform to convert the audio from the video into the transcription. As well, we learned how to work remotely with each other, and how to store and process video through MongoDB.
What's next for ZuneLectures
Increasing our reach to more platforms, including iOS and Android, and marketing this product to users at different universities across the world, so that they can all benefit from this increased productivity.
Built With
azure
css
google-cloud
html
javascript
mongodb
node.js
react
speech-to-text
Try it out
www.zunelectures.online
github.com |
9,987 | https://devpost.com/software/abracadabra-2cahrj | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for ABRAcadABRA
Hello! We are team ABRAcadABRA from Silicon Valley. Due to the coronavirus pandemic, we decided to focus our project on spreading awareness in our city, and found a way to notify members of our community about the dangers and prevalence of the virus at frequently visited places. Our goal was to inform people of the conditions of stores around them so that they can keep in mind how to better prepare for those places.
We intended to show varying degrees of color to represent different danger levels, with white representing low/zero risk and red representing an intense number of cases found in that area. We also intended the app to give users a platform to share their own personal knowledge of cases in different locations.
To do this, we used the Eclipse IDE, applying our skills in JavaFX to include images, buttons, and a flurry of colors. We started out unsure of how to move forward, but after gaining momentum we were able to "MAP" out a plan and successfully carry out our project. We incorporated buttons for users to press every time they heard of any cases in the area. In the background, we kept a running counter that would increase and change based on the data the users provided. After a certain number of cases, a dot next to the location automatically shifts color, notifying users of the danger in that zone/marketplace.
Overall, the project had its ups and downs, but through teamwork we were able to create a baseline, hopefully expanding and improving the project in the future.
Built With
java
Try it out
bitbucket.org |
9,987 | https://devpost.com/software/medtrack-yp27ir | A sneak peak into our project!
The logo of MedTrack!
Inspiration
India, unlike most other countries, does not have a centralised health system. This is especially crucial in this pandemic. Therefore, we decided to create a centralised health system.
What it does
This is a centralised health system for doctors, patients, data analysts, pharmacists, and pathologists to use. They run on different servers to ensure that the stakeholders get to see only the information they need.
How we built it
We did design, front-end and back-end programming.
Challenges we ran into
Time, time, time! We needed to rush things. We had a lot to do!
Accomplishments that we're proud of
We managed to build three servers (doctors, patients, and data analysts). We chose these three as they are the most important in this pandemic right now.
What we learned
We learnt teamwork! Since we are from time zones as much as 12 hours apart, we found it hard to agree on a specific time frame to finish the whole project. With good division of workload and cooperation with one another, we managed to come through with the project!
What's next for MedTrack
Finishing the pharmacist and pathologist servers.
Built With
css
design
javascript
ui
ux
Try it out
github.com |
9,987 | https://devpost.com/software/testeasy | Inspiration
Amidst online learning, students face a diverse array of test formats. Professors require a PDF version of their paper exams, but a vast number of students do not have access to a printer and must go through a tedious process to create the desired submittable PDF.
In fact, not every student can afford their own printer or scanner. As students, some of us have faced this problem ourselves, and we understand the pain of not having access to a printer. Some people rely on their public libraries, school libraries, and other means to find access. However, this strategy is no longer feasible for these students: due to the pandemic, most students are obliged to stay at home unless it is necessary to leave their safe haven.
Questions we considered:
What would happen to students who do not have access to printers or scanners during this quarantine period?
How can students conveniently merge their files to hand it in to their instructors?
Why are we going to let these students lose their opportunity to educate themselves if we know that we should do something about this problem?
Solution
Students won’t have to use multiple platforms to upload their work anymore!
TestEasy is a centralized platform that students can use to upload their work and answers. Students can merge the multiple files they have uploaded onto the website. Each file uploaded represents one answer to a question.
This web app would also help reduce paper waste and allow students to work on their assignments without the use of a printer.
Aside from that, TestEasy is an accessible and convenient web app that can be used on the web browser of your device.
How We built it
In the beginning, we brainstormed the functionality of the web app. We also came up with a name for the website and designed its logo. We prototyped the web app using Adobe XD as a group.
After planning out the prototype, we started coding the frontend of the website using HTML, CSS and JavaScript.
Then, we hosted our GitHub repository at a custom domain:
http://testeasy.tech/
Node.js was also used for the frontend and the backend of the website. After finishing the backend, we connected the frontend to the backend.
Challenges we ran into
In every path, there are obstacles to be encountered. Our team faced difficulties in communicating with each other due to time zone differences.
We’re from Canada and the United States. With the time zone difference, some of us had to compromise by sacrificing our sleeping schedules.
In addition, sometimes the internet connection sucks. It's difficult to work on a project during a call with a bad internet connection.
We were pretty much noobs in web development. JavaScript was quite challenging to learn firsthand. So then, we asked for help from a mentor who helped and guided us through JavaScript.
At the beginning of the hackathon, we had a different problem and solution that we wanted to tackle. Due to the time constraint and extensive technical complexity, we realized that our first idea would take approximately one week to finish. As a result, we had to pick a more realistic project about six hours after the hackathon began.
We implemented a dynamic website on our local host; however, due to the time constraint, we did not have enough time to host it as a live website.
However, we did not want time zone differences, sleep deprivation, bad internet connection and other setbacks to hold us back from participating at TO Hacks 2020.
Accomplishments that we’re proud of
One of the accomplishments that we are proud of is our domain from domain.com
Flexibility → pivoted and changed our project
Website → I learned node.js
Website looks presentable
What we learned
Making a pop up on a website
Git
Node.js
What's next for TestEasy
Improve the design and layout of the website.
Modify lines of code to be able to parse through more than one question.
Next step: host our server with a provider that supports dynamic server-side languages.
Built With
HTML/CSS/JS
Template from ColorLib
Node.js
What Makes us different from other competitors
Free PDF platforms won't let you overlay an image on top of the PDF layer.
Ours is customized to help students submit PDF assignments online
Merging full pages of documents together
Built With
adobe-xd
css3
html
https
javascript
node.js
Try it out
testeasy.tech
github.com |
9,987 | https://devpost.com/software/ocular-aid-kwld3t | Logo
Main graphical interface
Settings Page
Domains
nostrain.space
Inspiration
As society becomes paralyzed by the spread of COVID-19, more and more people find themselves staring at their computer screens while working from home. It is expected that many people will soon find themselves experiencing the harmful effects of prolonged screen time. We want to create an assistant that can recognize the signs of digital eye strain and alert the user.
What it does
Ocular Aid uses computer vision technology to detect symptoms of digital eye strain. In addition to this, face detection is used to calculate a user's total screen time and users are able to set periodic alerts that remind them to rest their eyes.
How we built it
Ocular Aid is centered around the fact that humans blink at a slower rate when fatigued or experiencing eye strain. Using OpenCV, human eyes and their bounding boxes are detected with Haar cascade classifiers. Each detected image is cropped to contain only a single eye, then classified as either open or closed by a pre-trained Densenet-121 feature extractor attached to a fully-connected classifier (trained to 96% accuracy on testing data). Ocular Aid's desktop application was created using C# and its WPF UI framework.
Challenges we ran into
A majority of our group had little to no experience with C# and WPF.
The Haar cascade classifiers had some issues when the user was wearing glasses.
Accomplishments that we're proud of
We hacked out a full blown Windows app that has so many real-world applications in just 24 hours! Not only that, but we learned a lot.
What we learned
We gained lots of experience with C# programming.
We learnt about Haar cascade classifiers.
We became much more experienced with OpenCV and image processing.
What's next for Ocular Aid
We'd like to have Ocular Aid take more factors than just blink frequency into account when detecting eye strain. There are also many possible optimizations that should be made. For example, we work with greyscale images of eyes while the convolutional neural network accepts 3 channel RGB images as inputs. This means that the neural network has more parameters than are actually needed.
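As a side note, the standard workaround for that grayscale/RGB mismatch is to replicate the single channel three times before handing the image to the network; that replication is exactly the redundancy described. A minimal pure-Python sketch (whether Ocular Aid performs the step this way is an assumption):

```python
def gray_to_rgb(img):
    """Replicate a single-channel (grayscale) image across three channels
    so it matches the 3-channel RGB input a pretrained network expects.

    img is a 2-D list of pixel intensities; each pixel becomes an
    (r, g, b) triple with all three values equal."""
    return [[(p, p, p) for p in row] for row in img]
```

In practice the same trick is a one-liner with array libraries (stacking or repeating along the channel axis), but the parameter redundancy noted above remains either way.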
Built With
.net
c#
css
html
javascript
opencv
python
pytorch
wpf
Try it out
github.com |
9,987 | https://devpost.com/software/covid-19-geotracker | What it does
Users can log in to their account and log the places they've been to with a date and time period. They can also self-report if they test positive for Covid-19. The system will notify all the users who have been in the same location at the same time as someone who has tested positive. We will keep all user information anonymous while making our community a safer place.
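The core matching step described above, finding users who visited the same place during an overlapping time window, can be sketched like this (a simplified in-memory illustration with hypothetical names; the real project stores visits in a SQL database behind Django):

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Two time intervals overlap iff each one starts before the other ends.
    # Start/end may be any comparable values (datetimes, epoch seconds, ...).
    return a_start < b_end and b_start < a_end

def users_to_notify(visits, infected_user):
    """visits: list of dicts with keys "user", "place", "start", "end".
    Returns the set of users who shared a place and time window with
    any visit logged by infected_user."""
    infected_visits = [v for v in visits if v["user"] == infected_user]
    exposed = set()
    for v in visits:
        if v["user"] == infected_user:
            continue
        for iv in infected_visits:
            if v["place"] == iv["place"] and overlaps(
                v["start"], v["end"], iv["start"], iv["end"]
            ):
                exposed.add(v["user"])
    return exposed
```

Notifications would then go only to the returned users, keeping everyone else's data untouched.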
Built With
api
bootstrap
css
django
firebase
google-cloud
google-cloud-sql
html5
javascript
leaflet.js
mysql
python
react
sql
Try it out
github.com
github.com |
9,987 | https://devpost.com/software/smart-rescue-system-qbemd3 | Inspiration
In the case of natural or man-made calamities, rescuing people and animals becomes a challenge.
Our project detects humans, animals, signs, and gestures, mimicking human vision and making decisions the way a human would.
Identifying objects for the purpose of rescue is a challenge for the human eye; to overcome this we use computer vision.
What it does and How we built it
We created a dataset that includes all types of gestures and objects and trained a model on it, which helps to recognize and detect various hand gestures, signs, objects, and persons.
We use drones instead of helicopters and land vehicles for rescue. We attached camera, GSM, and GPS modules and deployed the model trained on the above dataset, which detects objects in real time, tracks the location, and then sends the location along with visuals.
Challenges we ran into
Processing of Bulk of Data
Accomplishments that we're proud of
Solving existing problem around us
What we learned
New Technologies and New Challenges
What's next for Smart Rescue System
We would like to extend this drone to be capable of transporting organs between cities or locations.
Built With
drone
jetson-nano
opencv
python
twilio
uav
Try it out
github.com
bnsganesh.blogspot.in |
9,987 | https://devpost.com/software/canvas-calendar-transfer | Inspiration
More students are using online learning platforms to do their work. A popular platform is Canvas. It is often difficult to work with Canvas, as it has its own personal calendar which cannot be exported. This is the same for other applications. With teachers using different applications, it is often difficult for students to keep all the assignments together.
What it does
This program is meant to transfer the calendar activities from Canvas and other learning platforms to a personal calendar (currently Google Calendars), allowing for more convenience for the student user. As of now, the program supports Canvas and Google Classroom.
How I built it and What I learned
I learnt how to use UiPath Studio. Through a series of clicks, typing, and OCR, the program transfers the information from Canvas to a personal calendar. I learned a lot about UiPath and its workflows. I learnt how to use computer vision and OCR, and I also learned how to create variables.
Challenges I ran into
It was difficult to set up UiPath. Once there, I had to learn about the different activities. Also, working with times and organising my workflow were difficult.
What's next for Canvas Calendar Transfer
I hope to make it more robust and less glitchy. It's also very specified to my machine right now; I want to make it more broad and general. Also, I'd like to make it export in bulk and make it compatible with more programs. In addition, I would like to make a nice UI, which I was unable to do due to time restraints.
Built With
rpa
uipath
Try it out
github.com |
9,987 | https://devpost.com/software/socialbutdistant | Inspiration
We were inspired by the challenges of COVID-19 where the only way we can see our friends is on zoom calls. Zoom meetings and google hangouts started to be a very regular thing as the weeks of quarantine went on. All of a sudden we were bombarded with links daily, and on the weekends, hourly. It felt like zoom links were being used up more than face masks and it was hard to keep track of them all.
What it does
Manages currently ongoing video meetings between friends or large groups of people. The entire site is based around the idea of having a single link that never changes but is always your key to your video conferences with your friends. Initially, someone would create a group and pass out its link to those they wanted to have video conferences with. Those that received the link to the group could bookmark it and return to it daily to go directly into the current video conference without needing a new link for the new day. Anyone with the link has access to the video calls (no need to sign in!). We also optionally allow the group creator to set a secret word so the calls are not joined by randos. Once in the group with the link, anybody can create a video conference, but that does require creating a login.
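The single-link-plus-secret-word behaviour described above boils down to a small amount of logic. A sketch (shown in Python for brevity with hypothetical names; the site itself is a Node.js/Express app):

```python
import secrets

groups = {}  # token -> {"name": ..., "secret": ...}

def create_group(name, secret_word=None):
    # The token is the permanent part of the group's URL; it never changes,
    # so friends can bookmark the link once and reuse it every day.
    token = secrets.token_urlsafe(8)
    groups[token] = {"name": name, "secret": secret_word}
    return token

def can_join(token, secret_word=None):
    group = groups.get(token)
    if group is None:
        return False
    # No sign-in required: anyone with the link (and the secret word,
    # if the creator set one) gets into the current call.
    return group["secret"] is None or group["secret"] == secret_word
```

Creating a new conference inside a group would be the one path gated behind a login, as described above.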
How I built it
Made use of MongoDB and Google Cloud Platform to make a production-ready Node.js Express backend in no time at all. We kept the front end simple, using Bootstrap for CSS and EJS for template rendering.
Challenges I ran into
Besides a couple of silly bugs that burnt up some time, the hack went over rather smoothly. One issue was waiting for Domain.com to update the DNS entries for our domain so that google cloud would verify it and we could use our new domain (socialbutdistant.online) on google cloud with HTTPS SSL. Unfortunately, at submission time it still hasn't verified. Though, the google sub domain is not that bad either (social-but-distant.ue.r.appspot.com) and has turn key SSL.
Accomplishments that I'm proud of
Having an actual production ready deployed site. Google app engine made it way easier than I was expecting. Usually you are demoing off of local host at the end of a hackathon because you tried but failed or only got it out half way before you got real stuck. Not this time!
Also, we were both pretty new to the Node.js and Mongo stack, but luckily it all just clicked and we didn't get stuck too long on anything.
What I learned
Learned and loved EJS templates. They make it easier to render HTML files in a logical, modular way. Also learned that MongoDB and Google Cloud Platform are definitely the move for a hackathon because of their ease of use.
What's next for SocialButDistant
The site still needs a little bit more work to make it a fully functioning site. Would need to add pages and routes for the boring stuff like general settings, reset password etc. But besides that we could probably start making our friends use it right away!
Built With
bootstrap
css
ejs
express.js
font-awesome
google-app-engine
google-cloud
html
javascript
mongodb
node.js
Try it out
social-but-distant.ue.r.appspot.com |
9,987 | https://devpost.com/software/vett | Two players
Play the game now ! :
https://stoic-varahamihira-c8d884.netlify.app/
https://vett.space
Inspiration
Now that most people are staying at home, they're playing games with each other to connect. We played the Dots & Boxes game (called As Poojyam Vettu in our local language, Malayalam) when we were kids. It's still a popular game in school:
https://en.wikipedia.org/wiki/Dots_and_Boxes
How about we make it digital? That'd be cool! And we can play it with our friends after this!
We've also been working with WebTorrent and WebRTC P2P connections. It amazes us and inspires us to make more cool P2P stuff.
What it does
Implements the Dots and Boxes game in JavaScript. Players can play in their browser on the webpage and connect directly. No registration, instant invite and play!
How we built it
P2P connection is established using WebTorrent trackers as signalling server in WebRTC. It's built with a library we're making to leverage WebTorrent trackers to establish P2P connections :
https://github.com/subins2000/p2pt
There's no backend involved! It all happens in the browser! (except for the WebTorrent trackers)
Challenges we ran into
Building the grid
Building the logic for Dots and Boxes (How do we know if a box is complete?)
We're using Vue for the first time. It's good, but there were some problems we faced
Passing data peer to peer to know which box is selected
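The box-completion question above has a neat answer: a box is complete exactly when all four of its surrounding edges have been drawn. A sketch of that check (in Python for brevity; the game itself is JavaScript/Vue, and the edge representation here is an assumption):

```python
def box_complete(drawn, row, col):
    """drawn is a set of edges already placed by the players.

    We represent the horizontal edge above box (row, col) as
    ('h', row, col) and the vertical edge to its left as ('v', row, col);
    the box is complete when all four surrounding edges are in the set."""
    return {
        ("h", row, col),      # top edge
        ("h", row + 1, col),  # bottom edge
        ("v", row, col),      # left edge
        ("v", row, col + 1),  # right edge
    } <= drawn
```

After each move, only the one or two boxes touching the new edge need this check, which keeps the per-turn work constant.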
Accomplishments that we're proud of
It works !
We teammates played together and it worked
What we learned
We got to learn Vue !
We got to learn d3.js
What's next for Vett
Host it better and make it easily playable for everyone
Improve UX
Add invite links
Built With
bulma
javascript
vue
webtorrent
Try it out
stoic-varahamihira-c8d884.netlify.app
github.com |
9,987 | https://devpost.com/software/divoc-e0fywm | Flow chart depicting the working of the whole system.
Homepage of the application
Teacher Login
Student Login
Teacher Dashboard
Student Dashboard
Canvas as a blackboard
Asking question in middle of a lecture
Tab Change alert to gain students attention to the lecture
Inspiration
There is an old saying,
The Show Must Go On
, which kept me thinking about finding a way to connect teachers and students virtually, allowing teachers to take lectures from home, and developing a completely open-source, free platform different from the other major paid platforms.
What it does
This website is a completely open-source and free tool to use
This website whose link is provided below, allows a teacher to share his / her live screen and audio to all the students connected to meeting by the Meeting ID and Password shared by the teacher.
Also this website has a feature of Canvas, which can be used as a blackboard by the teachers.
Including that, this website also contains a doubtbox where students can type in their doubts or answers to the teacher's questions while the lecture is going on.
Again this website also has a feature of tab counting, in which, tab change count of every student is shown to the teacher. This will ensure that every student is paying attention to the lecture.
Also, teacher can ask questions in between the lecture, similar to how teacher asks questions in a classroom.
How I built it
1) The main component in building this is the open source tool called WebRTC i.e. Web Real Time Communication. This technology allows screen, webcam and audio sharing between browsers.
2) Secondly Vuetify a very new and modern framework was used for the front end design.
3) Last but not least, NodeJS was used at the backend to write the APIs which connect and interact with the MongoDB database.
Challenges I ran into
The hardest part of building this website was to find a
open source
tool to achieve screen and audio sharing. This is because the Covid crisis has affected most countries' economies due to lockdown. Hence, it is of utmost importance that schools and colleges do not need to pay for conducting lectures.
Accomplishments that I'm proud of
I am basically proud of developing the complete project from scratch and the thing that anyone who has the will to connect to students and teach them can use it freely.
What I learned
I learned a new technology called WebRTC which I believe that is going to help me more than I expect in future.
What's next for Divoc
Integrating an exam module and allowing teachers to take exams from home.
Built With
mongodb
node.js
vue
webrtc
Try it out
divoc.herokuapp.com |
9,987 | https://devpost.com/software/airnote-xby9z1 | AirNote
Make notes online, collaborate with friends easily!
Today all of us are stuck at our homes due to the CoronaVirus epidemic, and students are stuck too when it comes to attending school and studying. So we thought, why not use this opportunity to make an app that can help students and professionals make notes online and collaborate over the internet to seamlessly work or study in this time.
The Inspiration
I was inspired to make this app when a group of friends and I wanted to study and make notes together but were not able to meet in one place due to the COVID-19 virus, so I thought of bringing note-making to the internet so we can easily study and make notes together.
The App
Home page
So this app basically leverages the power of realtime databases to collaborate for making notes. The app works like this:
Click on the "Start a Session" > This will start a new session and redirect you to a new page.
Now start editing the new document.
To collaborate, just copy the tab address and share it with your friends.
If you want to edit these documents later, just save the link, these documents are saved online.
Technical Part
Technologies used: This app is mainly programmed in JavaScript as I am used to this language. I have used Firebase for the real-time database.
Other tech used:
HTML
CSS
Libraries used:
Firepad (firepad.io)
Tools Used
Firebase
Building Process The build was easier than I thought and it took me almost 4-5 hours to complete all of this. I used VSCode as my primary editor.
Problems Encountered The main problem was with the Firepad library, as it does not have very rich documentation, but I solved this problem by searching online and using tech articles. The UI was also a problem as I am not a very artistic person. :(
What I learnt? I used Firebase for the first time and I absolutely loved it. I also learned a lot about collaboration and UI.
Built With
css3
firebase
html5
javascript
Try it out
github.com
airnote-app.netlify.app |
9,987 | https://devpost.com/software/never-have-i-ever-hackathon-edition | never have I ever title card
Inspiration
Because of quarantine, we're spending a lot of time on social media and playing around with filters. We became curious as to how these filters were made and decided to take a crack at it ourselves!
What it does
Our project lets users play Never Have I Ever! It randomly picks a Never Have I Ever... question and the user has to answer that they have done it or that they haven't done it by tilting their head right or left respectively.
How we built it
We built our filter using Facebook's Spark AR.
Challenges we ran into
No one on the team had experience with Spark AR before, so there was a bit of a learning curve picking it up in 24 hours. The biggest challenge was learning what patches there were and how to creatively use them to do what we wanted to do. Spark AR is relatively new, so there was less community support. We had to figure it out as we went.
Accomplishments that we're proud of
We love how we were able to integrate the interactive left/right head tilt to our filter.
What we learned
We learned that we could learn a lot in 24 hours! Considering that no one on the team has used Spark AR before, we're really proud of the final product. All it takes is effort and motivation and a supportive team to create something amazing in such a short period of time. We have worked together as a team before so having the initial chemistry helped a lot in understanding the collaboration and work-style of each other.
What's next for Never Have I Ever (Hackathon Edition)
Hopefully in the future we can add more questions as well as add more interactivity such as a countdown timer and supporting multiple faces!
Built With
particle |
9,987 | https://devpost.com/software/who_dis_ar | Guess the person
To the correct caption before time runs out
Inspiration
It all started when my mom met my dad, and they had me. Hi, my name is Sim and my life has gotten a little dull; minutes of joy and sparks of happiness are now seldom found, unless I'm harassing my family with AR filters found on Instagram or proving I know more useless random facts, like how many fingers Galileo had... The answer is 10.
(I have not verified this information, but it's probably true. I can't find anything saying it's not true.)
What it does
It's both for entertainment and education. I think it's the study tool of the future! Students, teachers, families, and friends can all learn something new and share it. Or they can prove that they're actually much smarter than their family and friends!
How I built it
Spark AR, and I'm trying to get a website up for that sweet Best Domain Name prize. So we'll say GitHub Pages!
Challenges I ran into
I didn't sleep much last night and I'm hours behind you as I live in LA, so I woke up late and started late. Then I forgot to save in Spark AR: it crashed because I was running Fortnite (not sure how it got opened) and like a million Chrome tabs, thus erasing all my progress. To make matters worse, I've never done an AR project before or used Spark AR... so I was also breaking things... #learningcurve. And no one, not a single person, wanted to work with me. I'm stressed.
Accomplishments that I'm proud of
My quick research, and the fact I might just pull this off. I played up Swift and Gottfried for brownie points. I still did this less-than-24-hour hackathon even though everyone bailed on me. I made something I have never tried before, i.e. AR. I haven't fallen asleep yet and haven't OD'd on caffeine, staying awake without Awake chocolate. But in all seriousness, I'm making something that I'm passionate about, that makes me excited; otherwise I could have just gone to sleep and said screw this.
What I learned
Take the receipt out of the bag and shove it in your pocket, or your mom will judge you for spending $30 on energy drinks.
How to use Spark AR.
How to keep a straight face and not cry when everything you built is broken. How to learn a new skill in 5 hours.
How to understand the power of the individual, when he is both sleepy and determined.
What's next for who_dis_ar
Hopefully a shoutout from Swift and Gottfried? Also, I can see this having amazing results as a tool for learning in school, so hopefully more awesome people get added and it goes viral for like 2 hours and I fall asleep. I'm sorry, I'm really sleepy. Also, if what I wrote makes no sense: I have dyslexia and I'm sleepy, I'm sorry. It's my understanding everything is fine.
Also domain is whodisar.tech
Built With
ar
particle
Try it out
whodisar.tech |
9,987 | https://devpost.com/software/pixtools-a-discord-bot-for-hypixel-queues | Pixtools Logo, created out of red hardened clay and quartz blocks and slabs.
Hello! :D
Picture of the Pixtools website
Pixtools - A Discord bot for Hypixel Queues, making life better for Minecrafters
(
GitHub
,
Website Source
)
Inspiration
When our middle-school switched to fully online learning after the COVID-19 pandemic came to our state, our technology club decided to start hosting weekly Minecraft hang-outs on the Hypixel Network. Since then, we've had a lot of fun playing Minecraft together.
The Hypixel Network is a multiplayer Minecraft server with
lots
of minigames. With so many game modes and customization options, it is hard to choose what minigames to play on the calls.
Pixtools is a bot that solves this problem.
What it does
Pixtools is a discord bot that can connect to any discord server. Its main feature is a queuing system for minigames. Members can queue up their favorite minigames, and everyone can play how they want to play.
In addition to its queuing system, the discord bot talks to the Hypixel Network for stats and custom integrations. The bot can show you a list of current minigames on the server, the number of players who are online, and the status of the Hypixel API.
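The queuing system described above is, at its core, a first-in-first-out queue per server. A minimal sketch of that logic (in Python for brevity with hypothetical names; the actual bot is written in discord.js):

```python
from collections import deque

queues = {}  # server_id -> deque of (member, minigame) requests

def enqueue(server_id, member, minigame):
    # Each server gets its own queue, so members of one Discord server
    # never see another server's requests.
    queues.setdefault(server_id, deque()).append((member, minigame))

def next_game(server_id):
    # Pop the next requested minigame for this server, or None if empty.
    q = queues.get(server_id)
    return q.popleft() if q else None
```

The voting-based ordering mentioned under "What's next" would replace the plain FIFO pop with a priority lookup, but the per-server structure stays the same.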
How we built it
The PixTools bot is built with a Discord bot library called discord.js, which runs on Node.js. Our website was designed and built in parallel with the bot, with collaboration on design cues and colors.
Challenges we ran into
I figured time constraints would leave us with a product that didn't live up to our own standards, yet in the time we were allotted it looks good! - Arihant
This was the first time I had ever worked with databases before, and it was super confusing! In the end, I didn't have enough time during the hackathon to implement a feature. - James
In the beginning, I wasn't too confident about using databases or ESLint since I've had bad experiences with both, but since I had a good time I was able to push through the challenges. - Damian
Accomplishments that we're proud of
I'm proud of being able to utilize the indefinite amount of time that I have on my hands, plus our bond as a team was strong. - Damian
Getting myself out there to actually join a team instead of going solo sounded like a good change and I'm glad I was able to interact with more parts of the community due to it. - Arihant
I'm really proud that we were all working as a team, even through remote calling and internet snafus. I was worried at the beginning that we wouldn't be able to bond as a team as we would in person, but it was an amazing experience. - James
What we learned
I learned the basics for creating SQL databases, how to write discord bots, and how to use ESLint. - James
I learned how to use SQL dbs and how to configure and utilize ESLint. - Damian
My understanding of video editing has been pushed further, plus I re-familiarized myself with JS and Discord bot creation. - Arihant
What's next for Pixtools
We have a lot of plans for PixTools. Our end-goal is to get Pixtools on 100 discord servers by the end of the year. We want to be able to interface with our Hypixel guild to get player information and notifications when your friends are online. We also want to add a separate queue method where you can vote for your favorite minigame choices, and these would be pushed to the top of the queue. Giving a time range/estimate for each minigame would make our bot feel more polished and would be very helpful to the host. One of the most exciting features would be for the bot to be able to keep track of players' IGNs to help aid the host in inviting them to the party.
Overall, this experience has been really fun! We're super excited to bring Pixtools to the general public in the near future, to help facilitate video game hangouts around the world!
Built With
css3
discordjs
html5
javascript
node.js
Try it out
pixtools.p2phack.club
github.com
github.com |
9,987 | https://devpost.com/software/storeq | the homepage
a single quantum circuit with 6 qubits and initialized information
logo :D god bless online logo generators
StoreQ
StoreQ is a futuristic storage system that works by encoding classical data into quantum circuits.
What it is
StoreQ can upload and retrieve information from a quantum computer using quantum circuits. This entire process is done by simulating qubits and quantum processes on my personal computer (which is obviously not quantum).
StoreQ can successfully demonstrate encoding normal (classical) data into a quantum simulation, and is also capable of extracting the data's information from the probability distribution of the qubits. It thus demonstrates a one-of-a-kind quantum data storage and backup system.
Why it's special
I think this project has huge extensibility beyond what I was able to do in a day. Here are a few examples:
Because the storage media is via quantum circuits, we can use
entanglement
and
superposition
to represent all the data we accumulate in a single qubit superposition. [sorry for the jargon :(!] What it means is that we can represent a big chunk of information in a single quantum bit, and get the information we want from it by "measuring" it in certain ways.
Because of the API I created to link the backend to the frontend, it can actually be triggered from your google home! Say:
Ok Google, ask StoreQ to launch demonstration
Because it's not all theory and jargon, with nothing to show. I'm proud to have implemented a working concept!
What Inspired me
Art! I initially was thinking of a way to link quantum computing principles with art. Art led to images, and quantum computing led to quantum circuitry. And bam, the idea was born.
How I built it
The frontend is linked with the core backend via a python RESTful API using flask. I did not spend too much time on this, as you can tell, as the core backend is the emphasis. Nevertheless:
The
Upload
button uploads the image (in greyscale, and resized to not overload my computer) to the quantum computer simulation.
The
Download
button downloads the image by reconstructing the data from the probability distributions stored in the qubits.
This is how the backend core works. I'll go in steps:
Storage:
The data used here is an image. Each row's values are first one-hot encoded using a LabelBinarizer (a concept borrowed from machine learning), so that each vector's amplitude norm is 1. This is a quantum constraint we have to follow.
Each binarized vector is stored in a qubit array via initialization of the values, which is the basis of our quantum circuit.
Recovery
the array of quantum circuits and the
LabelBinarizer
objects are used to recover each vector.
The vectors are converted to numbers.
The numbers form the matrix, which is the resultant image.
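The storage and recovery steps above can be illustrated without a quantum simulator: one-hot encode each value (giving unit-norm vectors that satisfy the amplitude constraint) and recover it from the position of the 1. A pure-Python sketch of just the classical round trip (the project itself uses scikit-learn's LabelBinarizer and qiskit circuit initialization):

```python
def encode_row(row, classes):
    """One-hot encode each value in a row of pixels.

    Each value maps to a vector with a single 1, so its norm is exactly 1
    and it is a valid quantum statevector to initialize qubits with."""
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if i == index[v] else 0 for i in range(len(classes))]
            for v in row]

def decode_row(vectors, classes):
    # Recover each value from the position of its unit amplitude,
    # mirroring how the measured probability distribution is read back.
    return [classes[v.index(1)] for v in vectors]
```

In the real pipeline, each encoded vector would be passed to a circuit's initialize step and the decode side would read the simulator's probability distribution instead of the raw list.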
What I learnt
This was really, really, really fun. I learnt so much about how quantum computers work and was able to actually apply it to an idea that interested me. What gives me even more excitement is the possibilities this holds, that I haven't implemented yet due to time constraints.
I learnt about the current quantum computing ecosystem and what tools researchers in the field use. I also got to whip up a quick web frontend for it, which I was also proud to learn and implement.
Also, shoutout to my roommate for explaining to me fundamentals of quantum computing when my brain hurt from trying to understand too much at once. Like, a lot. He's got a bright future.
References:
Fixing weird LabelBinarizer with
a custom class
Quantum Circuits with
qiskit
Quantum physics and art article
Representing qbit states
css font generator
Built With
css
html5
javascript
python
qiskit
quantum
Try it out
github.com |
9,987 | https://devpost.com/software/parteatime-c6xuo7 | Our Front Page!
Our Boba Feed!
Our Welcome Page!
ParTeaTime is a social network that lets you share your experiences with tea anytime, anywhere! Built with React, Bootstrap, Express, and MongoDB at the Hack at Home Hackathon.
Inspiration
We were inspired to make this app because we missed boba tea. Boba tea is not just a drink; it is an experience. For those of you who do not know, boba tea is a Taiwanese tea-based drink that comes in many different flavors. Typically there are tapioca pearls or lychee jelly in the tea. Getting boba was a time to enjoy the comfort of a warm drink or to be energized by a cold one. We would get boba tea, or just tea, in groups, enjoying each other's company, or alone, savoring the taste. We missed not only the tea but the tea shop experience: people watching, comparing drinks, and hanging with friends. Our inspiration for this hack was to bring a little part of the boba tea experience to those at home. We know these are difficult times, but we hope to be able to distract or bring comfort to someone who needs it.
What it does
This project brings the ParTea to you! We allow users to upload their own pictures, titles, and captions of their own tea or creations. Upload your home-made boba or tea! Post your latest creation and the recipe to go alongside it! We have a fun BobaQuiz that categorizes users into their favorite tea flavor. Our navigation bar allows users to easily go through pages, and our Feed shows them what other users have shared! Consider this a wholesome social media, perfect for tea lovers everywhere!
How I built it
We used React.js for the frontend and MongoDB, Express, and Node.js for the backend. We deployed the backend to Google App Engine and the frontend to Firebase Hosting. The basic structure of the app is that it first uploads pictures to Google Cloud Storage, which returns a global link, and then the link gets stored in MongoDB with the post's other attributes. That way we can query posts with pictures on the frontend.
Challenges I ran into
We were having trouble uploading pictures to Google Cloud from the backend since there were many security measures imposed by the Google Cloud service, but eventually we got it to post. Using MongoDB as a database was challenging because it was something none of us had done before. Some of our members are very new to coding, so learning and employing new languages was challenging!
Accomplishments that I'm proud of
We are proud of our project! We are proud that we were able to overcome our challenges. We are proud that we were able to use git to collaborate in this virtual hackathon! We are proud of the new things we learned and were able to apply.
What I learned
One of our team members learned a ton about MongoDB, specifically the way it works with Node.js, and found building schemas quite engaging. Another member learned React.js and how to apply it to the front end.
What's next for ParTeaTime
We will continue to use ParTeaTime as our own personal friend group social media. We will expand it with new features to make it even more fun for all of us!
Built With
google-cloud
html5
javascript
mongodb
npm
react
yarn
Try it out
github.com |
9,987 | https://devpost.com/software/damp-sky-course-schedule-viewer | Usage Example
Inspiration
It is time to register for summer courses in colleges, and many schools provide web apps for their students to easily search & enroll. The idea is to clone the functionality of those web apps from schools but with a custom set of courses (curriculum).
What it does
This web application shows the permutations of course schedules given:
List of all available courses and their time slots (in JSON format)
List of desired course names (in semicolon separated format)
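The underlying computation, enumerating every conflict-free combination of sections for the desired courses, can be sketched as follows (in Python for brevity; the project itself is TypeScript, and the data shapes here are assumptions):

```python
from itertools import product

def conflict(a, b):
    # Each time slot is a (day, start, end) tuple; two sections clash
    # if any pair of their slots falls on the same day and overlaps.
    return any(d1 == d2 and s1 < e2 and s2 < e1
               for d1, s1, e1 in a
               for d2, s2, e2 in b)

def schedules(courses):
    """courses: {course name: [list of sections, each a list of slots]}.
    Returns every combination picking one section per course such that
    no two chosen sections conflict."""
    names = list(courses)
    valid = []
    for combo in product(*(courses[n] for n in names)):
        if not any(conflict(combo[i], combo[j])
                   for i in range(len(combo))
                   for j in range(i + 1, len(combo))):
            valid.append(dict(zip(names, combo)))
    return valid
```

Sorting the returned list by different criteria (morning preference, days off campus, etc.) is exactly the advanced functionality mentioned under "What's next."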
How I built it
This project is a single page application without any server-client communication. The algorithm to parse user-input and calculate permutations all resides on the front-end.
Express.js only serves static files (HTML & CSS) and javascript files compiled by webpack and typescript.
Result rendering is done in vanilla.js + HTML + CSS. A simple css grid-layout for each time-table.
Challenges I ran into
Donghyeon Kim: Writing styles for the generated time-tables took a long time.
Taehyeon Kim: It was my first time using typescript and its tools.
Accomplishments that I'm proud of
The permutation function and all other data-structure-manipulating functions are unit-tested fairly well. The team is proud of how we could build a well-tested code base in a very short period of time.
What I learned
Donghyeon Kim: I learned about css-grid-layout and how to use it.
Taehyeon Kim: I learned how to use typescript programming language and its compiler.
What's next for Damp Sky Course Schedule Viewer
The course schedule managers that inspired my team had more advanced functionality like sorting schedules in different priorities (e.g. morning class preference, most time off campus, most days off campus, etc.). It would be fun to implement such functionality in this project in the future.
Built With
express.js
node.js
typescript
webpack
Try it out
damp-sky-9374.herokuapp.com
github.com |
9,987 | https://devpost.com/software/childhood_revived | Childhood_Revived
Add motion control to your favorite childhood games
Motivation
The past couple of weeks under quarantine have been tiring. All physical activities have been greatly reduced (at least for me), so to motivate myself to get at least some form of exercise, I decided to add motion control to some of my childhood favorite games so I could control them by moving my body around instead of using a mouse and keyboard.
What Did I Do and What Did I Use?
I used Adafruit's Circuit Playground board to get accelerometer readings on the X, Y and Z axes. Based on the readings, the board acted as a HID device, simulating mouse clicks and keyboard keys.
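As an illustration of the mapping step (not the original CircuitPython code), here is a minimal Python sketch of how tilt readings might translate into key presses; the threshold and axis assignments are assumptions for the example:

```python
# Thresholds and axis assignments here are assumptions; the real project
# reads these values from the Circuit Playground's accelerometer and
# sends the keys as a USB HID device.
TILT_THRESHOLD = 3.0  # m/s^2 dead zone around level

def tilt_to_keys(x, y):
    """Return the arrow keys to 'press' for a given tilt of the board."""
    keys = []
    if x > TILT_THRESHOLD:
        keys.append("RIGHT_ARROW")
    elif x < -TILT_THRESHOLD:
        keys.append("LEFT_ARROW")
    if y > TILT_THRESHOLD:
        keys.append("UP_ARROW")
    elif y < -TILT_THRESHOLD:
        keys.append("DOWN_ARROW")
    return keys

print(tilt_to_keys(5.1, -4.2))  # ['RIGHT_ARROW', 'DOWN_ARROW']
print(tilt_to_keys(0.2, 0.0))   # [] (board held level, nothing pressed)
```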
Final Thoughts?
This was a project I had so much fun working on. I learned how to simulate a mouse and keyboard, how to interpret and map accelerometer data, and most importantly, I can now spend the rest of my quarantine playing games while staying healthy.
Built With
circuitplayground
mu
python
Try it out
github.com |
9,987 | https://devpost.com/software/likehome-nge6fb | Google Maps
Inspiration
We built a platform to help people with their emergency shelter needs. |
9,987 | https://devpost.com/software/karaoke-jam-3k05ew | What the user will initially see when arriving at the website.
The mic icon will turn red indicating to the user that it is recording, along with a change in placeholder text.
Inspiration
During this pandemic, our group sought a way to keep ourselves entertained in a creative manner. We missed being together with our group of friends and doing activities. This led us to the creation of Karaoke Jam. With our combined love for rap/hip hop music and Karaoke, we created a website that achieves this end goal.
What it does
Our website has an audio player with a pre-set beat for a person to use. The user can then rap their own lyrics into the provided text boxes, which transcribe the user's speech.
How we built it
Karaoke Jam was built using JavaScript, jQuery, HTML, CSS, and Bootstrap. The Web Speech API was used to transcribe the user's speech.
Challenges we ran into
Being able to match text from the API with each respective textbox and mic
Accomplishments that we're proud of
Karaoke Jam being the first hackathon project that all of us in the group have worked on.
Having a finished MVP
What we learned
Learning how to collaborate on a project
Learning soft skills and communication skills in a team setting
Learning how to implement an API
First time using jQuery
What's next for Karaoke Jam
In the future, we would like to add a feature where the user can record the audio of themselves rapping, including the beat playing in the background, downloadable as an audio file.
Built With
bootstrap
css
html
javascript
jquery
webspeechapi
Try it out
github.com
mir7160.github.io |
9,987 | https://devpost.com/software/anitrack-cizoup | Inspiration
What inspired me to create this was the lack of platforms that let me easily record what anime, manga or webtoon I read/watched.
What it does
It is an easy way to record your progress on an anime, manga or webtoon.
How I built it
I used Python to create the back end and Qt Designer, with PyQt5, to create the GUI. I mainly focused on letting the user clearly distinguish between the different types of entertainment.
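A minimal sketch of the kind of progress records the app keeps, in plain Python (the field names here are assumptions; the real app wraps this in a PyQt5 GUI and stores the data locally):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    title: str
    kind: str       # "anime", "manga" or "webtoon"
    progress: int   # episodes watched / chapters read

library = {}        # the real app persists this; a dict will do for the sketch

def update_progress(title, kind, amount=1):
    """Create the entry on first use, then bump its progress counter."""
    entry = library.setdefault(title, Entry(title, kind, 0))
    entry.progress += amount
    return entry.progress

update_progress("One Piece", "manga", 3)
print(update_progress("One Piece", "manga"))  # 4
```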
Challenges I ran into
I wanted to use Firebase as a cloud database for users to store their data in, instead of storing it locally. However, Firebase mainly supports Java for apps, and the libraries I tried had installation errors.
Accomplishments that I'm proud of
I learned how to make better GUIs with Qt Designer.
What I learned
I learned how to make better GUIs with Qt Designer.
What's next for AniTrack
Built With
python
Try it out
github.com |
9,987 | https://devpost.com/software/virtual-health-checkup-modelling-of-coronavirus-technoband | Technoband
Software Modelling of Future conditions of CoronaVirus
Inspiration
The daily surge in cases and the health conditions of citizens pushed me to work hard.
What it does
It predicts the future case curve of any country with respect to the available data set.
How I built it
I built it with the software tools that have been mentioned below.
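As an illustration of the kind of modelling involved (the exact model used in the project isn't specified), here is a minimal SIR epidemic model in plain Python, integrated with Euler steps; all parameters below are made up for the example:

```python
# All parameters below are made up for the example.
def sir_curve(pop=1_000_000, infected=100, beta=0.3, gamma=0.1, days=120):
    """Return the number of active infections per day under a basic SIR model."""
    s, i, r = pop - infected, infected, 0
    curve = []
    for _ in range(days):
        new_inf = beta * s * i / pop   # new infections this day
        new_rec = gamma * i            # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        curve.append(i)
    return curve

curve = sir_curve()
peak_day = curve.index(max(curve))     # day the predicted wave peaks
```

Fitting `beta` and `gamma` against a country's reported case counts is what turns this toy into a projection of future conditions.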
Challenges I ran into
Lots of challenges, but I overcame them and got the results I expected.
Accomplishments that I'm proud of
That I built something which satisfies and helps at least one citizen; from there the chain will follow.
What I learned
I learned new software and skills.
What's next for Virtual Health Checkup|Modelling of CoronaVirus|Technoband
If it is successful, I want to make it open source.
Built With
arduino
c++
embedded
matlab
python
webex |
9,987 | https://devpost.com/software/facial-expressions-recognition-using-web-camera | Training Data Set of Faces
facial Expressions Recognition 1
Facial Expressions Recognitions 2
facial Expressions Recognition 3
Facial Expressions Recognition 4
Inspiration
Human gesture plays a very interesting role in everyday applications, and it can be recognized easily using image processing. Consider, for example, a driver who is currently driving a vehicle: it would be quite useful to alert him when he is sleepy. We can identify a human gesture by observing the movements of the eyes, nose, brows and cheeks, which vary with time. The proposed system recognizes expressions by focusing on the human face. The implementation is based on two parts: a face detection classifier, and the finding and matching of simple tokens.
What it does
Performance of employees working in MNCs can be monitored using the proposed system. The system lets the company's HR monitor a particular employee's mood and, on that basis, assess their performance. The proposed system can be very useful for generating pie charts, bar graphs, etc. from the employee analysis results. Mood obviously affects work in both positive and negative ways, and changes in work can be traced with the help of the employee analysis results. A user and admin system for control can also be developed on top of the proposed system.
Our application will not only detect the user's mood but also provide relevant data from the database for boosting it. For example, the system will automatically fetch songs or jokes from the database and send them to the user's window if the user is in a sad mood. The system can also provide links to web pages of motivational speeches. The content provided boosts the user's mood, which helps them work efficiently and leads to enhanced performance.
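The mood-boosting step described above can be sketched as a small Python function: take the class probabilities a classifier would output for a face and look up content for the dominant emotion. The emotion labels and the content table below are hypothetical stand-ins for the project's database:

```python
# Emotion labels and the content table are hypothetical stand-ins for
# the project's database.
EMOTIONS = ["angry", "happy", "neutral", "sad"]
CONTENT = {
    "sad":   ["joke #17", "song: upbeat.mp3", "link: motivational talk"],
    "angry": ["song: calm.mp3"],
}

def suggest(probabilities):
    """Pick the dominant emotion and fetch mood-boosting content for it."""
    mood = EMOTIONS[max(range(len(EMOTIONS)), key=probabilities.__getitem__)]
    return mood, CONTENT.get(mood, [])   # happy/neutral users get nothing pushed

mood, items = suggest([0.05, 0.10, 0.15, 0.70])
print(mood, len(items))  # sad 3
```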
How we built it
Challenges we ran into
The human face plays a prodigious role in the automatic recognition of emotions and in human-computer interaction for real applications such as driver status monitoring, personalized learning and health monitoring. However, facial features are not subject-independent dynamic characteristics, so they are not robust enough for real-life recognition under subject variation, head movement and illumination change. In this project, we tried to design an automated framework for detecting emotions using facial expression. For human-computer interaction, facial expression is a platform for non-verbal communication: emotions are changing events evoked by a driving force, which makes them relevant to real-life applications.
Accomplishments that we're proud of
What we learned
In the field of image processing, it is very interesting to recognize human gestures for applications in everyday life. For example, it is very useful to observe the gesture of a driver while the person is driving and warn the person when he is sleepy. We can identify human gestures by observing the different movements of the eyes, mouth, nose and hands. This proposed system focuses on the human face to recognize the expression. Many techniques are available to recognize a face. The system presents a simple architecture for recognizing human facial expressions: the approach is based on a classifier for detecting faces and on searching and matching simple symbols, and it can very easily be adapted to a real-time system. The system briefly covers capturing image patterns from the webcam, face detection, image processing to recognize gestures, and some results.
What's next for Facial Expressions Recognition Using Web Camera
There is one more approach we have adopted: a chatbot built using artificial intelligence. The chat application lets the user talk with the bot, and this leads to identifying the user's mood from text or speech using text processing. Combining both approaches, the system will be able to provide jokes, songs and links to web pages by recognizing the user's responses.
Built With
keras
matplotlib
numpy
opencv
pandas
python
tensorflow
Try it out
colab.research.google.com |
9,987 | https://devpost.com/software/inquirehospital | Main Menu Page
Doctor Section
Add Doctor Section
Doctor Added
Patient Section
Add Patient
Patient Added
Medicine Section
Add Medicine
Medicine Added
Laboratory Section
Add Laboratory
Laboratory Added
Facility Section
Add Facility
Facility Added
Staff Section - Nurses List
Example of deleting an entry entering the ID
Staff Section - Security List
Inspiration
This is actually our first hackathon, and we decided to hack for those people who are currently not affected by Covid. We know the number of affected people is large, but those who are not affected are in the majority, and taking care of them is also necessary. We are all in quarantine for now and cannot go outside. So if any non-affected person feels sick, has trouble with any disease or faces some health issue (all apart from Covid), they get confused and the questions below come to mind:
Where to go?
Will the hospital be able to treat me?
Will the hospital have the specialist I require?
Will the hospital have medicine which I need?
Does the hospital have proper facility?
Does the hospital have required laboratories?
So helping those who are at home and non-affected was our main inspiration, because many times the news reports that, due to lack of awareness, non-affected people go to a hospital where Covid patients are being treated, and they also get infected; instead of breaking the chain, it spreads more!
What it does
It helps answer all the questions above that a non-affected person may have. It lists the following information:
Available Doctors details
Non-Affected Patients details
Types of medicines hospital have
List of Laboratories
Facilities provided by hospital
Staff working in hospital
All the above information makes it easier for a non-affected person to decide whether to visit the hospital or not. It also helps the person know where a doctor can be found and what kinds of health issues other patients have. Knowing the list of medicines really helps when a person is visiting the hospital just to buy them and not for any treatment, so the person can be sure before going.
How we built it
We wrote the entire code in Java and made a database using SQLite. After completing the base code and database, we started on the UI using JavaFX, adding buttons, text fields and labels. We did major formatting after completing each stage and solved many runtime errors along the way.
Challenges we ran into
It was a great challenge for us, as we lost most of the time thinking about what we would do; we also registered for the hackathon on the last day. After deciding what to do, we again took time to answer how we would do it. When we finally started, everything was initially working fine, but later we were not satisfied with the output we were getting, so we started again from scratch. This really raised our heart rates.
Many of the functionalities we added in our hack were new to us, so we needed to search across various sites to learn more, and we did not have much time; still, by distributing the work efficiently, we tried our best.
After a few hours we realized how much the code had grown from where we started, so finding small errors was again a challenge for us.
Not only this, one of our teammates had no electricity, and therefore no wi-fi, for almost an hour, right when only a few hours were left; this was a nail-biting situation for all of us. Beyond that, we tried our best to take up these challenges and not let them affect us.
Accomplishments that we're proud of
As this is our first hackathon, we are really proud of ourselves for writing such a long piece of code overnight. We learned a lot of new things that would not have been possible if we had just lost hope, thinking we didn't know enough.
This was our first time working with Java databases. We successfully completed the UI and fixed many compile-time as well as run-time errors.
In very little time, we did a lot all at once: writing code, researching, learning new tech, taking part in workshops, skipping sleep, making ourselves proud, and much more.
What we learned
We learned a lot, and it was great altogether. As beginners at hackathons, we not only learned how to build our hacks but also how to manage our time and keep to a schedule.
Working on a big hack idea, we got to know things we didn't know before. We learned more about Java, JavaFX, JDBC drivers, Java-database connections and SQLite. We also learned how to make a UI using JavaFX, and how to use various buttons, text fields and labels and connect them to the database.
What's next for InquireHospital
We are not stopping here; in such little time we did a lot, and we are sure that in the coming days we will do even better. We are going to add more functionality, such as:
Adding other hospitals as well
Creating a larger Database
Developing the UI
Adding Covid patients details as well
Collaborate with others
Do deep research.
Involve hospitals and doctors for better outcome
Make iOS & Android application for users
Publish the app on Play Store/App Store
Help the citizens of the world
This was the story with our hack, it was indeed a great experience!
Thanks for the opportunity.
Stay Safe!
Built With
database
fx
java
jdbcdriver
sqlite
Try it out
github.com |
9,987 | https://devpost.com/software/quarantin-d-5hlk6b | Splash Screen
FAQ screen
AgoraIO Join Call
Video Calling
Info Screen
Inspiration
Sitting at home during this pandemic, I thought it would be handy to have a single source of information about what's going on, along with communication with family and friends, most of whom cannot be seen in person right now.
What it does
Quarantin'd is a mobile app, made with React Native, designed with the Android operating system in mind. It has statistics about the pandemic, both global and country-wise, and has answers to many frequently asked questions from credible sources (which are quoted), because misinformation becomes all the more rampant during times of crisis. It also has a feature where you can join a video call with your friends, all without the need to create an account!
How I built it
React-Native:
This is the framework used to create mobile apps using React. I was already familiar with React, so I thought I would use it to get work done faster.
Agora-SDK:
I learnt about AgoraIO in one of my previous hackathons, and thought I would use their technology to implement video calling as I was not a complete stranger to it.
thevirustracker API:
To get real-time data about the pandemic
and some good ol' CSS3 styling to get it all looking nice and pretty
Challenges I ran into
The infamous react-native version mismatch, which pops up for no apparent reason and randomly ends up breaking the video calling feature.
Accomplishments that I'm proud of
Building the mobile app as this was my first step into mobile app development, and things went more smoothly than I thought
What I learned
Got very well acquainted with the CSS flexbox layout
Learnt how to set up an Android virtual device
Learnt about mobile app development and the React-Native framework
What's next for Quarantin'd
Fixing current issues with video calling.
Making the UI better and adding animations.
Compiling more FAQ's from multiple sources.
Add a groupchat feature
More details about the pandemic (eg: city-wise breakdown, doubling rate, etc)
Built With
agora.io
axios
css3
javascript
react-native
react-navigation
Try it out
github.com |
9,987 | https://devpost.com/software/desert-shooter-sit83d | Inspiration
So how high are these expectations? IDC predicts that the virtual and augmented reality market will dramatically expand from just over $9 billion last year to $215 billion by 2021. That incredible 118% compound annual growth rate would make VR one of the fastest-growing industries on the planet. As we find ourselves almost halfway through the year, questions still remain about VR and the video gaming industry. Although 2017 didn’t live up to the predictions, VR gaming has learned a lot and come quite a way since it began back in 2014. As brands continue to test and experiment within the VR realm, the arms race will continue to create the best product and experience for consumers.
There are many advantages of VR gaming:
1.Little/no risk
2.Safe controlled area
3.Realistic scenarios
4.can be done remotely saving time and money
5.Improves retention and recall
6.Simplifies complex problems/situations
7.Suitable for different learning styles
8.Innovative and enjoyable
What it does
Desert Shooter is a multiplayer virtual reality game that allows users to play against the computer, their families, or both! You can use it on iOS and Android.
How I built it
It is built on Unity3D, on top of Photon PUN, the GoogleVR SDK and echoAR. As every multiplayer game requires authentication and sign-ins, I integrated it with Google Firebase. All the assets are stored in the echoAR cloud.
Challenges I ran into
Integrating echoAR and Unity, developing the virtual reality game, and networking. Since I had to run the game on my phone to record the gameplay, the output video on YouTube is a bit blurry.
Accomplishments that I'm proud of
This is my second cloud-based game built from scratch, and the cloud approach reduced the size of the application enormously. I developed a few of the UI elements and game assets myself. I feel the user interface of the app and the effects are pretty cool.
What I learned
VR development is real FUN!! There are a lot of APIs and SDKs that Unity supports, and I learned about developing cloud-based applications. Personally, I feel the echoAR Unity SDK helped us a lot, as the assets don't need to be in the scenes; they show up when called through their key.
What's next for Desert-Shooter
I want this game to be cross-platform, so our next step is to make a web version of it and release it to production, so that users can have an immersive experience of modern gaming techniques.
Built With
c#
firebase
googlevr
unity
Try it out
github.com |
9,987 | https://devpost.com/software/empowered-to-empower | Website
Inspiration
I have been hosting AI sessions every Tuesday evening and wanted to find a fun way to invite friends for the events.
What it does
I built an AR filter for our AITuesday Event.
How I built it
Built a website using HTML/CSS for our homepage, named empoweredtoempower.tech, as we are a group of students tasked with empowering others to do more. Additionally, I added an AR filter for an event.
Challenges I ran into
It's my first time using AR, so impostor syndrome has been pretty strong. My internet has also been slow.
Accomplishments that I'm proud of
I've made a filter; though not perfect, it's a filter nonetheless.
What I learned
Making a filter.
What's next for Empowered to Empower
Share it with my friends, make improvements and add more filters.
Built With
css
html
Try it out
inncreator.github.io
github.com
empoweredtoempower.tech |
9,987 | https://devpost.com/software/asha-hgt9kb | NA
Built With
adobe-illustrator
firebase
flutter
google-web-speech-api
ibm-watson |
9,987 | https://devpost.com/software/hm-burger-website-design | Logo
Main Home screen
Our Story Page
Menu Section
Gallary
Reviews section
HM Burger is simply a restaurant website with a sleek UI and interactions. I built it to sharpen my design skills during the Covid-19 pandemic lockdown.
What I learned
Mostly made use of flexbox and improved my skills with it
learned bootstrap, AOS, animate.css to make the transitions look amazing.
Made it completely responsive so it also looks amazing in smaller devices
Built With
animate.cc
aos
bootstrap
css
html
javascript
Try it out
harshmauny.github.io
github.com |
9,987 | https://devpost.com/software/text-encryption-and-dencryption-whag9c | Homepage of website
Encryption Page with input message and receivers email.
Email received which the encoded string in binary format.
Decryption of message using the binary encoded string received by user.
Inspiration
In today's world, where messaging is used as a mode of communication, it is really necessary that our messages are secured so that only the person sending a message and the person receiving it can read it. So we created the Text Encryption and Decryption project to protect messages using encryption.
What it does
The sender opens the encrypt tab of our website and writes his/her message along with the receiver's email address, which generates a request sent to the servlet. The servlet runs our Huffman encoding algorithm, converts the message to binary format, and emails the binary encoded string to the receiver. The receiver then copies and pastes the binary string into the decrypt tab and clicks "get message" to read the message sent by the user. Only those who have access to the website can decode the message; it cannot be read by any machine or human outside our website.
How we built it
For building the frontend we used Bootstrap and SCSS, while for the backend we used a Java servlet, a Java library that supports web frameworks. For encryption and decryption of the message we used the Huffman algorithm, but while implementing it we saw that this algorithm only encodes individual letters, not the complete sentence, and does not count spaces, so we customized the algorithm; it now uses the dataset to encrypt the full message.
Challenges we ran into
As previously mentioned, the Huffman algorithm is designed to encode letters only and return the binary format of those encoded letters;
example :- for the input "aabba" it returns "a:001" , "b:1001" , "c:1010".
We spent four to five hours solving this so that it encrypts the complete sentence, and finally, after continuous effort, we were able to encode the complete message and append the encoded letters' binary formats into one string. We then faced another issue: the algorithm did not accept white spaces, and to overcome this we used the "|" symbol for white spaces.
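The servlet itself is written in Java; this Python sketch shows the same customized idea: build Huffman codes over the whole sentence, with spaces mapped to "|" as described above, and emit a single binary string:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free binary code for every symbol in `text`."""
    heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)   # two rarest subtrees
        n2, _, right = heapq.heappop(heap)
        merged = {c: "0" + b for c, b in left.items()}
        merged.update({c: "1" + b for c, b in right.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encrypt(message):
    message = message.replace(" ", "|")     # spaces become "|", as in the hack
    codes = huffman_codes(message)
    return "".join(codes[ch] for ch in message), codes

bits, codes = encrypt("hello world")
print(len(bits) < 8 * len("hello world"))  # True: shorter than 8-bit ASCII
```

Decryption walks the bit string back through the inverse code table, which is possible because Huffman codes are prefix-free.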
Accomplishments that we're proud of
In the course of this hackathon, we managed to successfully code in several languages: Java and JavaScript, along with HTML and CSS. We used data structures like the heap and hashmap to store the datasets and successfully map the key-value pairs, and we got positive results in all these languages.
What we learned
We learned new methods of coding and some data structures, like the heap and hashmap. We also got a clear idea of how Java can be used together with other languages like JavaScript and HTML. We learned how to set up the GlassFish server in the NetBeans IDE.
What's next for Text Encryption and Dencryption
We hope to perfect our code so that it can be used in real life for sending messages securely, and to use some well-known algorithms like RSA in the future.
Built With
bootstrap
css3
html5
huffman-algorithm
java
javascript
scss
servlet
Try it out
github.com |
9,987 | https://devpost.com/software/emoji-finder-2qxgcn | Full Emoji List also support click emoji to copy it
Search Feature
Emoji-Finder
Simple React App To Find Emoji's
To View This Project Go To:-
https://kamal-walia.github.io/Emoji-Finder
Built With
css
html
javascript
react
Try it out
github.com |
9,987 | https://devpost.com/software/eye-tracking-in-sparkar | Inspiration
I got inspired for the idea from
Marvel movies
and in specific Iron Man that how he projects the screen into his eye and E.D.I.T.H of course.
What it does
It is an Instagram FIlter, When the user taps on the screen it tracks the left eye and projects a 3D-model over it.
How I built it
I build it using SparkAR and Patch editor in the SparkAR Player and also by viewing several online videos and content as to how to achieve it.
Challenges I ran into
The most challenging stuff I thought was to model the whole 3D structure and to position it to make sure it looks amazing.
Accomplishments that I'm proud of
I learned more about SparkAR in this journey and learned to use Blender which is one of the biggest accomplishments I am proud of.
What I learned
I learned to model on Blender and its shaders and modifiers.
What's next for Eye-Tracking in SparkAR
I am now trying to add more vivid features like putting some kind of information on the eye like COVID tracking on the eye so that the user can use the filter to see the current updates regarding the COVID-19 in different parts of the country.
Built With
blender
sparkar |
9,987 | https://devpost.com/software/deep-learning-drone-delivery-system | Results of our CNN-LSTM
Accuracy after training our model on 25 epochs
MSE of our CNN-LSTM
How we preprocessed data for our model
Data preprocessing
Picture of Drone
Inspiration:
The COVID-19 pandemic has caused mass panic and is leaving everyone paranoid. In a time like this, simply leaving the house leads to a high risk of contracting a fatal disease. Survival at home is also not easy: buying groceries is frightening and online ordered necessities take ages to arrive. The current delivery system still requires a ton of human contact and is not 100% virus free. All of these issues are causing a ton of paranoia regarding how people are going to keep their necessity supply stable. We wanted to find a solution that garners both efficiency and safety. Because of this, drones came into the picture (especially since one of our group members already had a drone with a camera). Drone delivery is not only efficient and safe, but also eco-friendly, and it can reduce traffic congestion. Although there are already existing drone delivery companies, current drone navigation systems are neither robust nor adaptable due to their heavy dependence on external sensors such as depth or infrared. Because of this, we wanted to create a completely autonomous and robust drone delivery system with image navigation that can easily be used in the market without supervision. In a dire time like now, a project like this could be monumentally applied to bring social wellbeing on a grand scale.
What it does:
Our project contains two parts. The first is a deep learning algorithm that lets the drone navigate from images taken with a camera, a novel and robust navigation technique. The second is implementing this algorithm in a delivery system with Firebase and an iOS ecommerce application.
Using deep learning and computer vision, we were able to train a drone to navigate by itself in crowded city streets. Our model has extremely high accuracy and can safely detect and allow the drone to navigate around any obstacles in the drone’s surroundings. We were also able to create an app that compliments the drone. The drone is integrated into this app through firebase and is the medium in which deliveries are made. The app essentially serves as an ecommerce platform that allows companies to post their different products for sale; meanwhile, customers are able to purchase these products and the experience is similar to that of shopping in actual stores. In addition, the users of the app can track the drone’s gps location of their deliveries.
How I built it:
To implement autonomous flight and allow drones to deliver packages to people swiftly, we took a machine learning approach and created a set of novel math formulas and deep learning models that focused on imitating two key aspects of driving: speed and steering. For our steering model, we first used gaussian blurring, filtering, and kernel-based edge detection techniques to preprocess the images we obtain from the drone's built-in camera. We then coded a CNN-LSTM model to predict both the steering angle of the drone. The model uses a convolutional neural network as a dimensionality reduction algorithm to output a feature vector representative of the camera image, which is then fed into a long short-term memory model. The LSTM model learns time-sensitive data (i.e. video feed) to account for spatial and temporal changes, such as that of cars and walking pedestrians. Due to the nature of predicted angles (i.e. wraparound), our LSTM outputs sine and cosine values, which we use to derive our angle to steer. As for the speed model, since we cannot perform depth perception to find the exact distances obstacles are from our drone with only one camera, we used an object detection algorithm to draw bounding boxes around all possible obstacles in an image. Then, using our novel math formulas, we define a two-dimensional probability map to map each pixel from a bounding box to a probability of collision and use Fubini's theorem to integrate and sum over the boxes. The final output is the probability of collision, which we can robustly predict in a completely unsupervised fashion.
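Two of the pieces described above can be sketched with the Python standard library alone: recovering the steering angle from the model's sine/cosine outputs via atan2 (which handles wraparound), and summing a 2-D collision-probability map over each detected bounding box. The Gaussian-shaped probability map below is an assumption for the demo, not the team's actual formula:

```python
import math

def steering_angle(sin_pred, cos_pred):
    """Recover the steering angle (degrees) from the LSTM's two outputs."""
    return math.degrees(math.atan2(sin_pred, cos_pred))

def collision_probability(boxes, width=320, height=240):
    """Sum an (assumed) 2-D probability map over each detected bounding box."""
    def p(x, y):
        # risk concentrated at the bottom centre of the frame, ahead of the drone
        return math.exp(-(((x - width / 2) / 100) ** 2 + ((y - height) / 120) ** 2))
    norm = sum(p(x, y) for x in range(width) for y in range(height))
    total = sum(p(x, y) for x0, y0, x1, y1 in boxes
                for x in range(x0, x1) for y in range(y0, y1))
    return min(total / norm, 1.0)   # boxes may overlap, so clamp to 1

print(round(steering_angle(1.0, 0.0)))                    # 90
print(collision_probability([(140, 150, 180, 240)]) > 0)  # True
```

In the real system the double sum over the map corresponds to integrating the probability density over each box (via Fubini's theorem), and the map itself would be tuned to the drone's camera geometry.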
We built the app in Xcode using the language Swift. Much of the app is built around a Table View with customized cells and proper constraints to display an appropriate ordering of listings. A large part of it uses the Firebase Database and Storage, which act as a remote server where we store our data, while Firebase authentication allows both customers and companies to create their own personal accounts. For order tracking in the app, we transfer the drone's location to Firebase with a Python script and display its coordinates in the app.
Challenges:
The major challenge we faced was runtime. After compiling and running all our models and scripts, we had a runtime of roughly 120 seconds. Obviously, a runtime this long would not allow our program to be applicable in real life. Before we used the MobileNet CNN in our speed model, we started off with another object detection CNN called YOLOv3. We traced most of the runtime to YOLOv3's image labeling method, which sacrificed runtime in order to increase the accuracy of predicting and labeling exactly what an object was. However, this level of accuracy was not needed for our project; for example, crashing into a tree or a car results in the same thing: failure. YOLOv3 also required a non-maximal suppression algorithm which ran in O(n^3). After switching to MobileNet and performing many math optimizations in our speed and steering models, we were able to get the runtime down to 0.29 seconds as a lower bound and 1.03 as an upper bound. The average runtime was 0.66 seconds and the standard deviation was 0.18, based on 150 trials. This meant that we increased our efficiency by more than 160 times.
Accomplishments:
We are proud of creating a working, intelligent system to solve a huge problem the world is facing. Although the system definitely has its limitations, it has proven to be adaptable and relatively robust, which is a huge accomplishment given the limitations of our dataset and computational capabilities. We are also proud of our probability of collision model because we were able to create a relatively robust, adaptable model with no training data.
We are also proud of how we were able to create an app that complements the drone: a user-friendly app that is practical, efficient, and visually pleasing for both customers and companies. We are also extremely proud of the overall integration of our drone with Firebase. It is amazing how we were able to completely connect our drone with a fully functioning app and end up with a project that could, as of now, be instantly implemented in the marketplace.
What I learned:
Doing this project was one of the most fun and educational experiences we have ever had. Before starting, we did not have much experience with connecting hardware to software. We never imagined it would be that hard just to upload our program onto a drone, but despite all the failed attempts and challenges we faced, we were able to do it successfully. We learned the basics of integrating software with hardware, and also the difficulty behind it. In addition, through this project we gained a lot more experience working with CNNs. We learned how different preprocessing, normalization, and post-processing methods affect the robustness and complexity of a model. We also learned to care about time complexity, as it made a huge difference in our project.
What's Next:
A self-flying drone has a nearly unlimited number of applications. In addition to autonomous delivery systems, we propose using our drones for conservation, data gathering, natural disaster relief, and emergency medical assistance. For conservation, our drone could gather data on animals by tracking them in their habitat without human interference. For natural disaster relief, drones could scout and take risks that volunteers cannot, due to debris and unstable infrastructure. We hope that our drone navigation program will be useful for many future applications.
We believe there are still a few things we can do to further improve our project. To further decrease runtime, we believe using GPU acceleration or a better computer would allow the program to run even faster. This would then allow the drone to fly faster, increasing its usefulness. In addition, training the model on a larger and more varied dataset would improve the drone's flying and adaptability, making it applicable in more situations. With our current program, if you want the drone to work in another environment, all you need to do is find a dataset for that environment.
As for the app, other than polishing it and making a script that tells the drone to fly back, we think our delivery system is ready to go and can be given to companies for their usage with customers. Companies would have to purchase their own drones and upload our algorithm but other than that, the process is extremely easy and practical.
Built With
drone
firebase
keras
opencv
python
swift
tensorflow
xcode
Try it out
github.com |
9,987 | https://devpost.com/software/easy-track-online | Overview
Web Address:
http://trackyourfinances.online
or
https://myfinancesapp.herokuapp.com/
NOTE: trackyourfinances.online is my submission for the best domain registered with Domain.com
Inspiration
With the current COVID-19 situation going on, many people are under unfortunate circumstances where they may not have the same income as they had before. Many people are in a situation where they need a simple, yet powerful, solution that allows them to keep track of their personal finances and understand their financial health.
That's why I created this web application, which will help people to better track their finances during this pandemic.
What it does
This app has a fully-featured user authentication system, where all user accounts are anonymous, thus ensuring their privacy.

Users may add, update, and delete monthly expenses as they come in, which are subtracted from their total income.

Moreover, users have a module that allows them to set and track the progress of their financial goals. This lets them see their own progress and helps them strive to improve their financial health.

Lastly, the app calculates what percentage of income the user has left after expenses, and uses this figure to track the financial health of the user. This allows them to understand if they are doing well or if they need to evaluate their spending habits.
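The percentage-of-income calculation described above is straightforward arithmetic; a minimal sketch (function names and health thresholds are illustrative, not the app's actual values):

```python
def income_left_percent(income, expenses):
    """Percentage of monthly income remaining after the listed expenses."""
    if income <= 0:
        return 0.0
    remaining = income - sum(expenses)
    return max(remaining, 0) / income * 100.0

def health_label(percent_left):
    """Map the remaining-income percentage to a rough health rating.

    The cutoffs here are made up for illustration; the app may use
    different thresholds.
    """
    if percent_left >= 20:
        return "healthy"
    if percent_left >= 5:
        return "caution"
    return "at risk"

print(income_left_percent(1000, [250, 250]))  # 50.0
```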
How I built it
For front-end I used:
HTML
CSS
Bootstrap
For back-end I used:
Flask (Python)
SQLite
Challenges I ran into
I was unable to implement functionality for the app to automatically detect a new month, so the app currently requires users to start a new month by deleting their expenses manually.
Accomplishments that I'm proud of
I'm proud of the slender design of the front-end, and the mobile responsiveness of it.
Moreover, I am proud of the potential impact this may have on those who decide to use it, as it may help them to become more financially literate.
What I learned
I gained new skills working with Flask. I have become more comfortable working with back-end and databases during this hackathon. Moreover, I believe I have been improving my front-end coding skills, and have learned several new Bootstrap classes that are useful in elegant design.
What's next for Easy Track Online
I plan on extending the functionality of it by adding auto-detection of new months, increased profile customization, and new features. The good thing about this app is all the different functions are modular, and can be easily inserted onto the home page dashboard. Therefore, extending the functionality will be very possible.
Contact Info
Jeremie Bornais
[email protected]
Discord Tag: jere_mie#9432
Best Domain Registered With Domain.com
I would like to submit the domain trackyourfinances.online for the Domain.com prize
I believe it is a clever use of the .online TLD, and very relevant to this project.
DISCLAIMER: Some of the code for forms, user registration, and the overall project configuration may be similar to projects I have done in the past.
Built With
bootstrap
css
flask
heroku
html
html5
sqlite
Try it out
myfinancesapp.herokuapp.com
trackyourfinances.online
github.com |
9,987 | https://devpost.com/software/blogspot | Logo
Home Page
Post Page - Dark Mode
Post Page - Normal Mode
Due to COVID-19, everyone is stuck at home. I thought I would use this time to improve my skills, so I started off with improving my UI design skills. UI design is important because it is how people interact with your product to achieve their needs and goals.
BLOGS
BLOGS is similar to other blogging platforms: you can post, share, and comment.
Checkout BLOGS on Github:
https://github.com/letscodedev/blogspot
Built With
animate.css
bootstrap
css
html
javascript
jquery
Try it out
letscodedev.github.io |
9,987 | https://devpost.com/software/routine-ly | Still kinda new to Figma but I think it looks OK
Home screen
Creating a task
List of tasks
Editing a task
Inspiration
Quarantine = no routine, making it harder to get things done and stay productive. Routine.ly is a simple todo-list app that encourages you to get things done!
How I built it
Designed the UI using Figma, then used Flutter to code it. Also used Firebase as a back-end to store and retrieve tasks.
Challenges/Accomplishments
I worked alone on this project, so the time constraint was difficult to work with; I scaled the project down to a basic CRUD app to make it manageable in the time available. However, I think I did pretty well in starting a project and creating a framework for a more complex application in the future.
What I learned
Flutter. Is. Cool!
(also F i r e b a s e I guess)
What's next for routine.ly
Integration with Google Calendar to schedule times for tasks, and split up large tasks into daily sections.
Built With
firebase
flutter
Try it out
github.com |
9,987 | https://devpost.com/software/blockchange | Inspiration
The inspiration for Blockchange came from the large increase in usage I've seen of the platform Change.org since the beginning of the COVID-19 crisis. Many Change.org petitions have been created to get colleges to convert to pass/fail grading, to get large organizations to give back massive loans intended for small businesses, and much more. I realized, however, that Change.org has put itself in a position of considerable power by centralizing all the data on the people completing these petitions. I had never worked with blockchain before, but I began to wonder if there was some way to decentralize the work of the people creating and completing these petitions into a public ledger that would be easily accessible by anyone who fills them out (which is how it really ought to be). Thus, Blockchange was born.
What it does
Blockchange allows people to create petitions and sign petitions that they believe in much the same as Change.org. The way it differs is that these petitions and the people that create / sign them are stored in a blockchain. The use of blockchain over a centralized database allows for a great increase in transparency. Blockchain is ideal for voting systems like this since it has a high fault tolerance and the fact that everyone has a copy of the public ledger is ideal for a petition that people will be signing.
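The tamper-evidence that makes a public ledger attractive for petition signatures can be illustrated with a toy hash chain (a conceptual sketch only; the actual project uses Solidity smart contracts on Ethereum, and names like `add_signature` are illustrative):

```python
import hashlib
import json

def block_hash(fields):
    """Deterministic SHA-256 hash of a block's fields."""
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

def add_signature(chain, petition_id, signer):
    """Append a signature block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"petition": petition_id, "signer": signer, "prev": prev}
    block["hash"] = block_hash({k: block[k] for k in ("petition", "signer", "prev")})
    chain.append(block)
    return chain

def verify(chain):
    """Any tampering with an earlier signature breaks every later link."""
    prev = "0" * 64
    for b in chain:
        fields = {k: b[k] for k in ("petition", "signer", "prev")}
        if b["prev"] != prev or b["hash"] != block_hash(fields):
            return False
        prev = b["hash"]
    return True
```

Because everyone holds a copy of the ledger, any participant can run a check like `verify` themselves, which is exactly the transparency property a petition platform wants.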
How I built it
I used Truffle and the Solidity programming language to interact with the Ethereum Virtual Machine. This allowed me to build the smart contracts for interacting with the blockchain. I then deployed to a personal Ethereum Blockchain using Ganache. The frontend of the application is written in React. I also used Canva / Figma for designs and for making the logo.
Challenges I ran into
I had never worked with blockchain technologies at all before this project and had only started learning about what the blockchain is, its advantages, etc. about a week prior when I got the idea for this project. This steep learning curve coupled with the 24 hour time limit made it very difficult.
Accomplishments that I'm proud of
I feel like I gained a grasp of a completely new technology and learned a great deal in a short amount of time. I also worked solo on this project, which made it hard to stay committed without the encouragement and collaboration of teammates, but despite this I still managed to create something I am proud of within the 24-hour time span.
What I learned
I learned an immense amount about blockchain and blockchain technologies, and definitely improved my time management: since I was working alone, the work had to be planned out carefully.
What's next for Blockchange
I would love to look deeper into how this could actually be applied further. For the purposes of this hackathon this was deployed to a personal Ethereum Blockchain using Ganache but I would love to see what steps could be taken to make this a deployed alternative to change.org.
Built With
blockchain
css3
ethereum
html5
javascript
node.js
react
solidity
truffle
Try it out
github.com |
9,987 | https://devpost.com/software/land-mark | Starting screen
Search for a city
Add a city
City details
City details
Add a picture and a comment
Add as many cities as you want!
Inspiration
Currently we're all stuck inside waiting for this quarantine to be over. Everyone is bored and wants to get out of their house. I believe this is a good time to look back on all the exciting moments in our lives to forget about the fact that we can't leave the house. So my app idea was to allow people to create their own map that displays all the cities they've visited in their life. Every city in the world has something special about it and everyone has a memory of all the cities they've traveled to. Creating a collection of visited cities would be a really nice way for someone to keep track of where they've been and attach a memory to it. Users can continuously add cities to their list and create a customization screen, where they can add the date they visited, a picture, and comments. The end result is a world map with a bunch of markers everywhere. Everyone's map will be different and heavily personalized. Users will come back to the app whenever they want to re-visit a memory.
What it does
Users are initially greeted with a world map, where they can easily move around and zoom in on. If they tap the "plus" button in the top right, they are able to search for a city. Once they type in a city, the map zooms in on that city and they are given the option to add it to their list. If they say yes, they are brought to a new screen for their specific city. Here they can view the exact location, add the date they visited it, add a picture, and add comments. This screen is meant for the user to customize to their liking, as it gets saved and they can always come back to it and view their memory.
The user can add as many cities as they want to their list. The app supports every single city in the world and displays all of the added cities as markers on the map. Once they populate their map, they can come back to it any time and keep adding more. They can also view it on any Apple device, as it backs up to iCloud.
How I built it
I built the app entirely using Xcode/Swift. The UI is built with UIKit, and the MapKit framework is used to display the map. To get the city data, I parsed a large JSON file from the internet. I also saved all user data to iCloud so that nothing gets deleted when they close the app or use a different Apple Device.
Challenges I ran into
The biggest challenge I ran into was saving all the user data so that it wouldn't get deleted when they close the app. I needed to save all the cities the user adds to their list, and then all the data associated with them. The user is able to edit this data whenever they want so that makes it even more difficult to save data efficiently. I ended up having to refactor a lot of code to get it working but in the end it was worth it. Not only does the data save to the user's device, it also saves to their iCloud account so that they can easily view their map on another apple device.
I also had problems with the search functionality. Apple's built-in search returns anything it can find, from countries to pizza shops. I only wanted it to return cities, so I needed to come up with a good solution. I decided to find a city dataset and then only allow the user to search for items within that dataset. I did it in such a way that the search query still goes through Apple's API and then through the dataset filter I made. This meant that people could have typos in their query and it would still return the correct result. I was very satisfied with this outcome.
Accomplishments that I'm proud of
I am proud of the fact that the app works really well and has no bugs (that I know of). I got stuck many times trying to fix bugs or implement features and I'm glad I pushed through and managed to get everything working to my liking.
What I learned
I learned quite a bit while developing this app. I learned how to use the MapKit framework, how to parse JSON with Swift, how to save user data to iCloud, how to build a complex TableView, and how to add search functionality in apps. Overall it was an amazing experience and I feel like a much better iOS Developer now.
What's next for Land Mark
A feature that I wanted to add but didn't have time for, would be sharing capabilities. I want users to be able to share their maps full of their experiences/memories with their friends/family so that they can view it on their device. This would be a great addition to the app and would definitely be a challenge to implement. I would also like to add an On-boarding screen to inform the user what the app is all about and how to use it. I will be submitting this to the App Store very soon.
Built With
json
mapkit
swift
Try it out
github.com |
9,987 | https://devpost.com/software/the-smart-helpline | Inspiration
To help the doctors to overcome their difficulty
What it does
It acts as a platform to bridge doctors, police, and the municipality with the public
How I built it
I built my application using flutter
Challenges I ran into
Doing what I like
Accomplishments that I'm proud of
I'm happy with what I have done
What I learned
How to develop more applications
What's next for The Smart helpline
More ways to help people as an engineer
Built With
flutter
google-web-speech-api
natural-language-processing
Try it out
docs.google.com |
9,987 | https://devpost.com/software/community-hero | Messenger chatbot
SMS chatbot
Webapp
Delivery portal
Inspiration
When the pandemic began, it was clear that the elderly would suffer disproportionately: not only are they the age group most at risk from the virus, but they are also the likeliest to be living alone, meaning they have to do their own groceries. Of course, supermarkets have been shown to be transmission vectors for the virus, making a simple essential act like picking up weekly groceries something that can lead to catching Coronavirus. This is, of course, a much more dangerous move for the elderly, since the mortality rate increases exponentially with age. One could suggest online ordering as a solution; however, this age group is likelier to be tech-illiterate, so conventional online ordering is less likely to draw them in, due to their unfamiliarity with it combined with confusing site layouts. That is why we built CommunityHero: a truly accessible online ordering solution, tailored to keep the most vulnerable parts of our population safe.
What we do
CommunityHero provides 3 main interfaces for online ordering. Firstly, a smart Messenger chatbot which provides product images and helpful tips, and uses machine learning to help parse each customer's shopping list. Secondly, an SMS interface for those not on Facebook or not on the Internet, with approximately the same functionality as the Messenger chatbot. Lastly, a webapp which works like traditional online ordering sites, albeit with a simpler layout.
Messenger
SMS
All of these interfaces, each one tailored to the needs of our target customer (an elderly person with little to no tech literacy), link to a web interface showing all pending orders. This is accessible by our “Community Heroes”, part of our volunteer delivery network. We currently plan to partner with local NGOs in every area we serve to help find a large number of Community Heroes quickly and efficiently.
Community Heroes can log in on a web interface, see all of the available orders on a map, and choose which ones to claim. Upon claiming an order, they can see the full order details. The system will also then send a message to the original customer saying that their order has been claimed. To further decrease the chances of infection between the Community Hero and the customer, we present the Community Hero with a set of guidelines to follow to ensure the process is safe, including provisions to wear PPE.
How we built it
We use an Android phone as the SMS gateway, which receives incoming SMS messages and forwards them to the backend, built using Django and hosted on Heroku. We use NLP to search for the products requested in the products available, and choose the best combination that minimizes price and distance. The searching is done by splitting all strings into N-grams and using the Jaccard Distance. Further, based on what products are most commonly in carts, we made a product weighting system, based on how popular each item is. For example, even though Soy milk and Skimmed milk would both be results to the search 'milk', the user is more likely to want skimmed milk. This is done automatically based on the collective current shopping carts of all users. This allows for trends to remain as trends (for example, hopefully, disinfectant products will not be so common in 2-3 years)
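The N-gram / Jaccard matching described above can be sketched as follows (a minimal illustration with hypothetical product names; the production system also folds in price, distance, and the popularity weighting described in the text):

```python
def ngrams(s, n=3):
    """Set of character n-grams of a lowercased string."""
    s = s.lower()
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def jaccard(a, b, n=3):
    """Jaccard similarity between the n-gram sets of two strings.

    High similarity survives small typos, since most n-grams still match.
    """
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / float(len(ga | gb))

def best_match(query, products, n=3):
    """Return the product name most similar to the (possibly misspelled) query."""
    return max(products, key=lambda p: jaccard(query, p, n))

# A misspelled query still finds the intended product:
print(best_match("skimed milk", ["skimmed milk", "soy milk", "bread"]))
```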
Our chatbots are stateful - this means that the context from the previous messages is understood in order to find the state in which the conversation is. For example, the bot knows that the user is still completing his/her profile, so it understands that an incoming message containing 'Yes' is not something to search for, but rather an answer to a previous question.
The frontend communicates with the backend RESTfully.
Demo
Instructions on how to run the demo are available
here
Challenges
It was difficult to find a viable business model, since the service is based on people's willingness to help others and to volunteer, which may be difficult to promote. However, since we are in the middle of a global pandemic, it is a lot easier to get people to dedicate their time than before.
Currently, our business model relies on attaining revenue through small commissions stores will pay whenever a customer shops with them through our platform. This revenue will be used to help provide PPE (masks & gloves) and support to our Community Heroes, who we will source via local NGOs. The benefit to this approach is it allows us to scale faster than usual - NGOs are often more efficient when it comes to getting together a large number of volunteers, and we are certain that for a
A technical difficulty we had was the fact that for Facebook Messenger to accept our messages / to send us messages, it needed to be hosted over HTTPS. Therefore, it couldn't be tested locally without a domain name. Hence, every time we changed something in the code we needed to push to GitHub, and redeploy the whole backend, which took 2-3 minutes every time, no matter how small of a change we wanted to make.
Another thing that we were anxious about is the Facebook App Review. This is a process needed in order to make an app that uses the Facebook API live. However, this is a manual process that may take up to 2 weeks! We didn't expect the process to finish in time for the demo, so we also implemented a process through which anyone can send us their Facebook profile, and they would be added as testers. Fortunately, the process finished a few hours before the submission deadline, so even though it was already implemented, we don't need to use this long (and manual) process of adding testers, since our chatbot is now public.
Accomplishments
Awarded at the Greek Crowdpolicy Antivirus Hackathon amongst 50 teams!
We were able to make significant progress on the chatbot within the past 2 days. The idea was refined and revised multiple times, and we feel that the idea itself is quite unique: we certainly haven't been able to find an equivalent alternative on the market. Even though there are a few similar ideas in the hackathon with the same concept of crowd-delivery, we believe that our range of ordering methods adds significant value to our product.
What we learned
Make REST APIs
Use the FB Messenger API to send messages
Go through the FB verification process
Use, manage and launch WordPress WooCommerce site
Hosting on Heroku
Using NLP libraries
What's next
Partner with supermarkets
Build in navigation into deliverers web interface, make it into a cross-platform mobile app
Partner with local NGO to source first CommunityHeroes and bring product live ASAP
Who we are
We are a team of high school students from Cyprus interested in the social good, trying to contribute, in whatever way we can, to as many people as possible in these dire times.
Any feedback is welcome here as a comment, or shoot us an email at
[email protected]
:)
Built With
android
bootstrap
django
facebook-messenger
heroku
java
jquery
leaflet.js
natural-language-processing
php
postgresql
python
rest
sms
woocommerce
wordpress
Try it out
github.com
communityhero.live |
9,987 | https://devpost.com/software/fishtank-business | Home page
profile
Add project page
landing page
website login
website dashboard/ home page
view project page
view project demo
website add project page
Inspiration
Heard of Shark Tank? It's essentially that, but smaller. Many of us who have attended hackathons hope to turn our projects into real-life startups, but this is often hard without funding. We hope that our platform helps students and other developers put their projects out there for investors to see and helps make these small projects a reality!
What it does
FishTank is a platform where students and other developers can post their projects online in the hope of finding investors to fund their startup. The platform can also serve as a way for people to get ideas from others' projects.
How we built it
Our team was composed of 4 members, so we decided that two would work on the website platform and two on the mobile app. We used the Flutter framework with Google Firebase (and other APIs) to create the mobile app, and jQuery, JavaScript, HTML, CSS, Firebase, and Blockstack to create the website version.
Challenges we ran into
One challenge we ran into was that there was no Blockstack support for Flutter, so we had to improvise and use another service for the mobile app. Getting Blockstack working on the website was also somewhat of a challenge, but we got it in the end.
Accomplishments that we're proud of
We are very proud that we were able to create a functioning mobile app and a website in this short period of time that we had to create this project.
What we learned
We learned that building our project on both platforms is a very hard task in a short timespan; maybe next time we will go for only one of them.
What's next for FishTank-Business
We hope to make FishTank an actual successful startup one day in the future, it would be really nice to see our creation come to life.
Built With
blockstack
css3
dart
firebase
flutter
google-firebase
html5
javascript
jquery
node.js
Try it out
fishtankbuisness.s3-website-us-west-1.amazonaws.com
github.com |
9,987 | https://devpost.com/software/helpnow-0nf9x6 | Home Page
Volunteer Registration
Map Search
Organization Profile
Volunteer Profile
Chat
New Request
Request Page
Advanced Search
Inspiration
As a few of the many people stuck at home during this global health crisis, we were eager to look for ways to help as volunteers. However, many of the websites and mobile apps we encountered were difficult to navigate, and it was hard to find opportunities local to us. With HelpNow, we wanted to fix both of these problems.
What it does
Organizations can make requests that volunteers can signup for
Volunteers and organizations can contact each other through a chat service
Volunteers can find local organizations on a map or through a search
How we built it
HelpNow is a website created using HTML/CSS and JavaScript backend to handle user input. Firebase was used to handle and store account, chat, and request data.
Challenges we ran into
A lot of effort was spent performing rigorous testing to identify and fix minor errors, especially those that appeared when the individual features were combined into one website.
Accomplishments that we're proud of
We are especially proud of our map feature, built using the Google Maps API, which provides a nice user interface for volunteers to find local organizations. We are also proud of our chat service, the first we have built with Firebase on the web.
What's next for HelpNow
We hope to extend the features of HelpNow by making it easier to use for organizations through automated services; for example, adding a system that helps organize and schedule signups for requests. There is also work that can be done to simplify the user interface.
Built With
css3
firebase
html5
javascript
Try it out
github.com |
9,987 | https://devpost.com/software/smart-intruder-detection-sytstem | As the COVID 19 has become a global pandemic due to which we face problems like:
1. Medical inadequacy.
2. Food and shelter problems for poor people.
3. Lack of social distancing in crowded areas.
4. No direct communication with doctors due to lockdown.
5. Less availability of ambulances in a few areas.
6. Monitoring of people who were affected.
7. People are unaware of the locations of affected people.
8. Donations and NGO activity are not on a single platform or app.
Our Solution to this:
We have come up with a solution to all the important problems of the COVID-19 pandemic described above. Our idea addresses these difficulties, with many add-ons, in a single application along with a health-monitoring wrist band.
1. Our application provides a single platform that meets all these needs, making it better than existing solutions.
2. Communication with doctors and nursing graduates in emergency situations through video and audio calls.
3. See available ambulances nearby through a geo-locator.
4. People affected by COVID-19 are monitored by the health department.
5. An identifier for nearby COVID patients and self-isolating people.
6. Daily COVID-19 updates on a dashboard.
7. A wrist band with Bluetooth connectivity connects to the application and frequently reports body temperature, pulse rate, heart rate, etc.
8. A donate option so that a person can donate money to CM or PM funds directly using trusted payment gateways.
Built With
agora
amazon-web-services
blockchain
flutter
google
google-maps |
9,987 | https://devpost.com/software/minimalist-fiction | Inspiration
Web games made with Adobe Flash that can be found on sites such as Kongregate and Armor Games.
What is it
A short story-based platformer game.
What's next for Minimalist Fiction
A sequel story.
Built With
gamermaker |
9,987 | https://devpost.com/software/alarme-an-alarm-clock-you-can-t-say-no-to | the mess of a clock ill put on my bedside table
Inspiration
Over the course of the quarantine my sleep schedule has gotten worse and worse. Sleep in till noon? Why not? It's not like I have somewhere to be. However, after setting alarms to try and wake up earlier, I simply could not get myself to wake up. If I can snooze my alarm, I will snooze all I want.
I knew it was time for something that I had a liiiiiittle less control over.
What it does
This is an alarm clock that can be set from your phone, but with no option to turn it off within arms reach, you'll have to go elsewhere. This means getting up out of your bed to a designated terminal you set up SOMEWHERE else in your house. Walk there, tap your phone to it, and BOOM the alarm turns off. Not only are you out of bed now, but you're happy you have the whole day ahead of you too!
How I built it
I used Google's Flutter SDK to develop an easy, modern-looking mobile app for both iOS and Android. This app is what allows you to set the time you want the alarm to go off, and it is the tool you'll use to turn the alarm off in the mornings: when you tap the NFC tag, the app sends data to the alarm clock telling it to turn off.
The alarm clock consists of an Arduino MKR1000, a small WiFi-enabled Arduino. This was perfect for driving a small speaker module from an old printer and a DC motor I put off balance to act as an annoying vibrator. Running a small server from this Arduino on my WiFi network, I can receive data from the Flutter app as long as the Arduino has power!
Challenges I ran into
Getting the Arduino board to connect to my computer: my libraries got corrupted and took me hours to fix.
Timing the vibration and buzzing of the alarm: the Arduino can only do one task at a time, so I had to figure out a better way than using delay()
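The usual alternative to delay() is checking elapsed time on each pass through the main loop (millis() on Arduino), so two outputs can run on independent periods. The firmware itself is Arduino C++, but the pattern can be sketched in Python (an illustration of the idea, not the project's code):

```python
import time

def run_alarm(duration_s=0.3, buzz_period=0.1, vib_period=0.15):
    """Toggle two 'outputs' on independent periods without blocking sleeps.

    In firmware, the toggle counters below would instead flip the speaker
    and motor pins; the key point is that neither output ever waits on the
    other, because nothing in the loop blocks.
    """
    start = time.monotonic()
    last_buzz = last_vib = start
    toggles = {"buzz": 0, "vib": 0}
    while time.monotonic() - start < duration_s:
        now = time.monotonic()
        if now - last_buzz >= buzz_period:
            toggles["buzz"] += 1   # would flip the speaker pin here
            last_buzz = now
        if now - last_vib >= vib_period:
            toggles["vib"] += 1    # would flip the motor pin here
            last_vib = now
    return toggles
```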
Accomplishments that I'm proud of
Connectivity between my phone and an arduino wirelessly anywhere in my house was really cool.
What I learned
Much more planning should have gone into how the software works on the app side than I did; I did none, and spent a lot of time figuring it out.
I learned a ton about arduino, soldering, mortors, small electronic components, and time management with it all.
What's next for AlarMe - An alarm clock you can't say no to!
Next is a segmented display so I can see the time at night, a more fluid app that I could actually see myself using, and better reliability. With more time I feel this is going to be something I use to get up in the morning. I also want to put it in a case so it isn't a jumble of wires.
Built With
arduino
buzzer
dart
flutter
nfc
Try it out
github.com |
9,987 | https://devpost.com/software/hacktogether-i1ay4x | Sign Up/Log In screen
Message Capabilities
Home Page
User Profile
Inspiration
Many beginner coders want to hack and learn how to build products, but unfortunately don't have the resources needed. Hackathons are difficult to come across if you don't know where to look. They have fees or are geared toward more experienced individuals. Teams are difficult to form if you're not already in a friend group of coders. Many hackathons have team-building sessions at the beginning, but that can be nerve-wracking to newbies. We even have unclaimed ideas that teams can take and build their own version of.
Hack Together is a platform that does it all at once: find a hackathon, join a team, and create a product.
What it does
Create an account on Hack Together and immediately find different hackathons near you. Not sure which hackathon you want to join? There are also teams looking for more members. Just want to create on your own? We have ideas that people were willing to share with the group.
How we built it
We used Figma to explore UI/UX design in a collaborative way. We registered hacktogether.online with Domain.com for future use.
Challenges we ran into
Both of us are new to design so we looked up different examples to put together our prototype.
Accomplishments that we're proud of
We started the hackathon a few hours late so we were happy to find an idea that worked for us.
What we learned
We hadn't used Figma in depth before. We learned how to use wireframing software to work efficiently.
What's next for HackTogether
We'd like to go into development next and make this functional.
Built With
figma
Try it out
www.figma.com |
9,987 | https://devpost.com/software/companion-prog | Transcript of Demo Video
Companion Prog
Inspiration
I always forget basic syntax and need to keep searching it online or wherever.
So, I made this app for forgetful people like me.
Or, for anyone who wants to learn Python, especially while stuck at home.
What it does
Displays the syntax for a chosen category of Python constructs (those that are available).
Then, it runs the program in the terminal
How I built it
Using the tkinter Python library, I created an interactive GUI.
In each category, I also coded the program to run in Terminal.
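As a rough illustration of how a GUI can hand a chosen example to the terminal, one common pattern is to spawn a fresh interpreter with `subprocess` (the snippet catalog below is invented for illustration, and the tkinter wiring is omitted):

```python
import subprocess
import sys

# Hypothetical catalog - the real app organizes many more Python categories.
SNIPPETS = {
    "for loop": "for i in range(3):\n    print(i)",
    "f-string": "name = 'world'\nprint(f'hello {name}')",
}

def run_snippet(category):
    """Execute the chosen example in a fresh Python process and return its output."""
    code = SNIPPETS[category]
    result = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, check=True
    )
    return result.stdout
```

A GUI callback would simply look up the selected menu item in the catalog, display the source, and show `run_snippet(...)`'s output.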
Challenges I ran into
I could not figure out how to attach the Terminal to the GUI.
Also, I got stuck on how to display the text when a user selects from the Menu
Accomplishments that I'm proud of
This is my first solo program that I had to learn an entirely new library for.
That, and my first solo program that wasn't assigned by a teacher.
What I learned
I learned how to use the tkinter library and how annoying styling is
What's next for Companion Prog
I want to add more python libraries.
But first, I need to finish the basic Python commands.
Then, I will expand it to include other languages like C++ or Java.
Built With
python
Try it out
github.com |
9,987 | https://devpost.com/software/metallictime-ibe9yu | Inspiration: To learn something new
What it does: Changes the crown to silver or gold based on the blink of an eye
How we built it: Using Spark AR studio
Challenges we ran into: Changing the eyeball color and setting its position and texture
Accomplishments that we're proud of: Adding a crown
What we learned: How to create a filter
What's next for MetallicTime: Add different background objects and ornaments on the neck, filters for the eye, and lots of interesting ideas
Built With
api
arstudio
particle
Try it out
github.com |
9,987 | https://devpost.com/software/relaxside-za2uwh | My desire to learn new technologies inspired me to participate in Hackathon.
This application enables users to select their favorite categories from the given list. From the given selection, the fortune wheel randomly picks a few tasks and displays them on the wheel. Once the user spins the wheel, the user has to do the task that the arrow points to.
This is a single page web application that is done in HTML, CSS, Bootstrap, Javascript divided into two parts. The first part is the home page and get-started page where you pick your categories and the next part is the spinning wheel. The fortune wheel is created using HTML and Javascript.
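The wheel's core logic - pool tasks from the selected categories, pick a few for the wheel, then land on one - can be sketched in a few lines (shown in Python rather than the site's JavaScript, with invented category data):

```python
import random

# Hypothetical category data for illustration; the site has its own task lists.
CATEGORIES = {
    "fitness": ["do 10 push-ups", "stretch for 5 minutes"],
    "fun": ["call a friend", "doodle something"],
}

def build_wheel(selected, slots=3, rng=random):
    """Pool tasks from the selected categories and pick `slots` of them for the wheel."""
    pool = [task for name in selected for task in CATEGORIES[name]]
    return rng.sample(pool, min(slots, len(pool)))

def spin(wheel, rng=random):
    """Return the task the arrow points to."""
    return rng.choice(wheel)
```

Passing an explicit `rng` keeps the picking logic testable; in the browser the same two steps map to array shuffling plus a random index.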
We came across a few errors, but they were all solvable, so we wouldn't call them real challenges.
Completing the work within the constrained time is the accomplishment we can point to.
We can add a profile for each user and maintain a database that tracks user choices and the tasks they do. Also, we can put up some stats based on their activities and interests.
Built With
bootstrap
css
html
javascript
visual-studio |
9,987 | https://devpost.com/software/jarvis-fmlk01 | Inspiration
The inspiration for this project came from the movie Iron Man where AI can talk to humans in a much larger bandwidth and much more smoothly that it's hard to distinguish a bot and human.
What it does
It is a chatbot that talks with the user. It is built with Google Dialogflow, and it can tell you jokes, recite poems, and more! While it's still basic, it is constantly learning, and different features are being added and tested.
How I built it
It was built with Google Dialogflow and it was trained with different user inputs and outputs. Now it can do small conversations.
Challenges I ran into
I ran into challenges such as using it with the Raspberry Pi. I wanted to embed it in the Raspberry Pi, and I learned a lot from this about the board and its limitations.
Accomplishments that I'm proud of
I'm proud of getting to know the Raspberry Pi and Python. I learned the Python language, and this was my first time doing a Raspberry Pi project.
What I learned
I learned a lot of the Python programming language, and how to use the Raspberry Pi (it was my first time).
What's next for Jarvis
Next I'll try to make it smarter and embed other platforms such as Teachable Machine and other powerful boards. I'm trying to make an AI assistant that interacts with you smoothly; instead of only mics and speakers, I want to attach cameras too, so that it can see what you're seeing, learn things visually, and perform more complex tasks such as scientific calculation in laboratories.
Built With
google-cloud
Try it out
bot.dialogflow.com |
9,987 | https://devpost.com/software/coronavirus-update-twitter-bot | twitter feed with COVID-19 update
Inspiration
We wanted to provide an easy and fast way to be informed about the current state of the pandemic in the United States.
What it does
We've created a Twitter account that automatically posts COVID-19 stats in the US daily. Each day it tweets out the total cases and deaths due to the virus, as well as the daily cases and deaths. The numbers are from the official CDC website. The twitter account is
@Coronav53389711
.
How we built it
All the automation is done using UiPath's StudioX.
1. Goes to the CDC website to extract data (total cases, total deaths, daily cases, daily deaths)
2. Creates a CSV file
3. Opens Notepad++ and runs a macro to change the formatting of the extracted data, then copies it to the clipboard
4. Opens Twitter and pastes the data into the text box, then sends the tweet
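For illustration only - the project itself uses StudioX, not Python - the middle steps amount to parsing the CSV and assembling the tweet text. A sketch with assumed column names and tweet layout:

```python
import csv
import io

def read_first_row(csv_text):
    """Parse the CSV the robot writes and return its first record as a dict."""
    return next(csv.DictReader(io.StringIO(csv_text)))

def format_covid_tweet(row):
    """Turn one scraped record into the daily tweet text (layout is assumed)."""
    return (
        "COVID-19 update (source: CDC)\n"
        f"Total cases: {int(row['total_cases']):,}\n"
        f"Total deaths: {int(row['total_deaths']):,}\n"
        f"New cases today: {int(row['daily_cases']):,}\n"
        f"New deaths today: {int(row['daily_deaths']):,}"
    )
```

In the StudioX flow, the Notepad++ macro plays the role of `format_covid_tweet`, and the result is pasted into Twitter's text box.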
Challenges we ran into
Originally we wanted to create a website and an Alexa Skill, but there were too many roadblocks to complete it in time. Plus, if someone was willing to visit a website, they might as well visit the CDC site. So the idea was to have the information simple and easily accessible through Twitter.
Accomplishments that we're proud of
Having an end product to show.
What we learned
Learned how to use UiPath's StudioX, and gained some Alexa Skill knowledge from the research at the beginning.
Built With
notepad++
uipath |
9,987 | https://devpost.com/software/audiobox | Interface
Inspiration
I was inspired by many of the digital synthesizers I used to produce music.
What it does
The AudioBox plays a tone, users can adjust the volume and change the frequency
How I built it
I built this application using the JUCE framework and C++
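For readers curious what the audio side boils down to: the app fills each audio block with scaled sine samples via a phase accumulator. JUCE does this in C++, but the arithmetic is the same in any language; a Python sketch (the block size and sample rate are typical defaults, not values taken from the project):

```python
import math

def sine_block(frequency, volume, sample_rate=44100, num_samples=512, phase=0.0):
    """Fill one audio block with a sine tone; returns (samples, next_phase).

    Mirrors the per-block oscillator loop an audio callback would run:
    advance a phase accumulator by 2*pi*f/fs each sample and scale by volume.
    Carrying the phase across blocks keeps the tone click-free.
    """
    delta = 2.0 * math.pi * frequency / sample_rate
    samples = []
    for _ in range(num_samples):
        samples.append(volume * math.sin(phase))
        phase = (phase + delta) % (2.0 * math.pi)
    return samples, phase
```

Changing `volume` or `frequency` between blocks is exactly what the app's sliders do.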
Challenges I ran into
Following the documentation, I had trouble with many of the modules offered by JUCE. I wanted to make a complete synthesizer with keys; however, some of the guides in the documentation were misleading.
Accomplishments that I'm proud of
I'm proud that I got a chance to start learning this framework. At the very least, I have an application that produces sound.
What I learned
I learned that I should have pushed my code upon having a working product.
What's next for AudioBox
I'm hoping that I can implement a full keyboard with MIDI inputs!
Built With
c++
juce
Try it out
github.com |
9,987 | https://devpost.com/software/borgr | The main page, click the burger to find your nearest burger restaurant!
An example of the kind of Google Maps page it'll redirect you to.
My UiPath StudioX Setup for Outlook email automation
An example of an automatically generated UiPath email
IT'S LIVE! Try it for yourself!
borgr.space
(if your browser complains about that URL go
here
instead!)
Domain.com Best Domain Submission: borgr.space
Inspiration
Close your eyes and imagine you're on a road trip with your best buddies. Someone's stomach grumbles, you need to find somewhere to eat. It's been a long day and you don't want to argue but how will you pick where to go? That's the problem I'm trying to solve. Introducing
borgr
, my new project that sends you to the nearest burger restaurant with one click.
What it does
Using
borgr
is simple: go to borgr.space and hit the button, and it'll redirect you to a Google Maps page for what Radar.io thinks is the nearest burger restaurant to your location.
This works by using a pair of Radar.io's APIs. First it uses the IP Geocoding API to get your coordinates from your IP address, then uses the Place Search API to find the nearest burger restaurant to you from those coordinates. Then, it'll do some
borgr magic
and generate a well-formatted Google Maps search link, which it'll then redirect you to, letting you easily get directions.
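As a sketch of that link-generation step (the place fields below are invented; the URL shape follows Google's documented `maps/search/?api=1&query=` form, which may differ from what borgr actually emits):

```python
from urllib.parse import urlencode

def maps_search_url(name, latitude, longitude):
    """Build a Google Maps search link for a place found via the Place Search API.

    Uses the documented https://www.google.com/maps/search/?api=1&query=... form;
    including the coordinates alongside the name keeps the search unambiguous.
    """
    query = f"{name} {latitude},{longitude}"
    return "https://www.google.com/maps/search/?api=1&" + urlencode({"query": query})
```

The Flask handler would call this with Radar.io's top result and return an HTTP redirect to the resulting URL.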
In addition, my UiPath implementation means that whenever someone uses the website, I get an email saying where the borgr button led them to in an effort to create a
borgr map
in the future to show where in the world people have been wanting burgers and using borgr.
How I built it
The site is hosted by a Python 3 Flask server I deployed on Heroku with gunicorn, with an HTML frontend. The backend heavily utilizes Radar.io's IP Geocoding and Place Search APIs. I also used UiPath's StudioX to send myself the Outlook emails.
Challenges I ran into
-Learning to use Heroku
-Fighting with HTML to make the site legible
-My wifi network thinking I was in NJ and giving my devices IPs accordingly. These IPs were nowhere near any burger places :(
-Learning to use StudioX
Accomplishments that we're proud of
-This was the first hackathon I did by myself!
What's next for borgr
-Get a prettier frontend, I'm a backend guy with little to no graphic design/fancy framework knowledge
-Taking data from automated UiPath emails and doing some data visualization on them to create maps of where people are using borgr
Built With
flask
heroku
html5
python
radar.io
uipath
Try it out
borgrapp.herokuapp.com
github.com |
9,987 | https://devpost.com/software/how-long-is-it | how-long-is-it
A web app meant to help people visualize conversions between units
If you were raised on imperial units like me, you could probably tell me, roughly, how long a 60-mile drive is. You have an idea in your head of how heavy 5 pounds is because you interact with these measurements on a daily basis. For many, this breaks down when you think about metric units! How long is a 4 km walk? Short? Super long? My guess is that you don't know.
My app fixes this! On the site, you can see metric units equated to everyday things you would encounter!
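Under the hood, a site like this is essentially a lookup table of reference lengths plus one division; a sketch with reference objects of my own choosing (the values are common approximations, not the site's actual data):

```python
# Reference lengths in meters; the chosen objects are illustrative assumptions.
# A 100-yard football field is 91.44 m; an Olympic pool is 50 m long.
REFERENCES = {
    "football field": 91.44,
    "Olympic swimming pool": 50.0,
}

def describe_distance(kilometers, reference):
    """Express a metric distance as a count of a familiar everyday length."""
    meters = kilometers * 1000.0
    count = meters / REFERENCES[reference]
    return f"{kilometers} km is about {count:.1f} {reference}s"
```

So the 4 km walk from the example above becomes a concrete number of pool lengths or football fields rather than an abstract unit.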
Check it out:
www.bcarpenter.tech
In making this, I encountered a few problems. I decided for the first time to experiment with a CSS framework, Bootstrap to be specific! While it simplified a number of things and produced a nicer-looking product than I would have made without it, I lacked the familiarity I needed to be able to customize. With more time, I think I could produce a much nicer-looking product; however, I am happy with the simple but fun functionality of my site. In the end, I am super proud that I was able to go from an idea to a site; too many times I've found myself starting things but not seeing them through. This feels way better!
Try it out
github.com
bcarpenter.tech |
9,987 | https://devpost.com/software/covidseek | Inspiration
Since the beginning of this pandemic, many people globally have been in a state of confusion and panic. Many healthcare systems need a way to allocate resources properly based on the density of the pandemic. Furthermore, many people do not know how long this virus will keep spreading. We built COVIDSeek to answer these problems by providing accurate visualization and predictions/forecasts of the pandemic.
What it does
COVIDSeek is a web application that connects people and healthcare systems through accurate information and predictive analytics. Users enter their location to see a density heatmap of the virus on an international scale, which is also useful for medical practitioners and the healthcare system. They also see the specific number of cases and deaths in their respective area on a given day. Finally, they are provided with a forecast of what cases might rise or fall to in the next 1-2 months.
How we built it
On the front end, we used HTML, CSS, and JavaScript through the Bootstrap web framework. On the backend, we first use the Google Maps API in Python (through gmaps) to visualize the heatmap, and we pass this into an HTML file. Furthermore, we use Flask to serve the JSON data of cases and deaths (across the world) to our front end, and SQLAlchemy as a way of storing the data schema in our database. We use the FBProphet library to statistically forecast time-series data and future cases through Bayesian analysis, logistic growth, and predictive analytics, factoring in trend shifts as well.
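As a hedged sketch of the data-shaping step behind serving that JSON (the record field names are assumptions, and the Flask routing itself is omitted), rolling raw per-day rows up into a per-country payload might look like:

```python
from collections import defaultdict

def build_payload(records):
    """Group raw daily rows into the JSON-ready structure a front end could read.

    Each record is assumed to look like
    {"country": ..., "date": ..., "cases": ..., "deaths": ...};
    the real app's schema may differ.
    """
    by_country = defaultdict(lambda: {"cases": 0, "deaths": 0, "days": []})
    for rec in records:
        entry = by_country[rec["country"]]
        entry["cases"] += rec["cases"]
        entry["deaths"] += rec["deaths"]
        entry["days"].append({"date": rec["date"], "cases": rec["cases"]})
    return dict(by_country)
```

A Flask view would then just `jsonify(build_payload(rows))`, keeping the aggregation logic testable on its own.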
Challenges we ran into
We ran into challenges regarding the visualization of the heatmap, as well as the creation of our forecasting algorithm, as we didn't have much experience with these areas. Furthermore, serving some parts of the data to the front end from Flask had some errors at first. It also took time to assemble the data into a consolidated file for analysis, which was a bit hard in terms of finding the right content and sources.
Accomplishments that we're proud of
We are proud of how much progress we've made considering how new we were to libraries such as FBProphet and Flask, and the unique, special, and effective way we learned how to implement it. We learned how to create opportunities to benefit different areas across the world through data analytics, which is something that we're very proud of doing.
What we learned
In terms of skills, Aryan developed his front-end skills with Bootstrap, using different ways of styling. Shreyas also developed his front-end skills while working with Aryan to structure the front end, as well as learning Flask and the gmaps API. We learnt that there are numerous ways an individual can help the world around them through computer science.
What's next for COVIDSeek
In the future, we want to add a user-interactive search bar that places a marker on their location and zooms into the map, as well as a way for users to report symptoms/cases on the map. We also want to add more features, such as nearby testing sites, hospitals, as well as nearby stores with a certain amount of resources that they might need. Overall, we want to make this web app more scalable worldwide.
Built With
bootstrap
css3
fbprophet
flask
google-maps
html5
javascript
matplotlib
numpy
pandas
python
sqlalchemy
Try it out
github.com |
9,987 | https://devpost.com/software/random-chatbot-about-life | Inspiration
Due to the COVID-19 pandemic, we are all stuck at home. And we are all really bored (at least I am). So I created this simple chatbot that can chat with you and be your best buddy.
What it does
This is a Discord bot: you can ask it a question, and it will give you a response, whether it makes sense or not.
How I built it
I built it using
chatterbot
and
discord.py
What I learned
I learned about how to build a simple chatbot!
What's next for Random Chatbot About Life
Make it more random!
Built With
discord
python
Try it out
discord.gg
github.com |
9,987 | https://devpost.com/software/spark-ar-filter-which-mlh-duck-are-you | Facebook Spark AR Sticker Challenge
Create an Instagram filter using Facebook's Spark AR.
Built With
sparkar
Try it out
instagram.com |
9,987 | https://devpost.com/software/stay-safe-g1o3td | Inspiration
It is about giving more love to each other, as it is more about we instead of me. We are here together as a team even though we are apart, and together, we can grow.
What it does
This is a filter for Instagram to tell each other to stay home in a cute way with the hearts and high-pitched voice. However, it does convey a serious message of staying home but also spreading love, not germs.
How I built it
We built it with Spark AR. This is our first-ever project and hackathon using Spark AR, and we thought it would be a fun challenge to do something that is more our speed.
Challenges I ran into
We had no idea on how to use SparkAR as well as what to expect in a hackathon. A lot of research was done on SparkAR during the creation of our filter, but we did prevail at the end.
Accomplishments that I'm proud of
We are especially proud that we created a filter, even though it might not have been the best. We are glad that we worked together as a team and determined what needed improvement in our current and future projects.
What I learned
Learning how to use Spark AR is a major accomplishment for all of us. We had to download the application and then learn what each and every function did. It was a difficult task, but after playing around with the application for a few hours, we seem to have learned the gist of it.
What's next for stay safe!
What is next for stay safe is hard to say. However, we do acknowledge that if we stay away from each other while giving each other love, we will be able to fight this together no matter how difficult it may be; we are together as a whole. As stated above, it is about we and not me.
Built With
sparkar |
9,987 | https://devpost.com/software/stacy-bot | Interface in FB messenger
This representation of NLP
Features which will be added more as time goes
PLEASE NOTE THIS IS A TEST BOT, AS PUBLISHING AND VALIDATION TAKE TIME, SO IF YOU WANT TO USE IT YOU NEED TO BE A TESTER. BUT YOU CAN USE THE PHONE CALL FACILITY.
CALL AT: +1 463-221-4880
(This is a toll-free number based in the US; if you are outside the US, only minimal international charges apply. I am from India and it costs $0.0065/min.)
If you want to use this app in your Facebook Messenger like shown in the video then please comment your Facebook ID in this project's comment section, I will add you as a tester to this app
IT IS JUST A WORKING DEMONSTRATION OF MY IDEA TO TACKLE THE PROBLEM; IT CAN BE ADAPTED TO THE DEMANDS OF ANY ORGANISATION. AND THE BEST PART IS THAT IT IS NOT A CONCEPTUAL IDEA; IT IS A REALISTIC ONE THAT CAN BE DEPLOYED AT ANY MOMENT ACCORDING TO THE DEMANDS OF THE ORGANIZATION.
Our Goal
General Perspective
Due to the COVID-19 situation, the world's workforce is shrinking (since everyone is maintaining self-quarantine and social distancing), which is creating great havoc in the world. Through this project I mainly aim to tackle this problem and help health organizations with a virtual workforce that runs 24*7 without any break and handles all kinds of matters, from guiding people through filling out forms to managing patient data automatically.
Business Perspective(if required)
A bot service (it is not a company yet; I am just saying that we want to build or start this company, as we are student developers right now) that adds a virtual workforce to every client organisation to help it bloom in the market. From a business perspective, our potential targets are small businesses, NGOs, and health organisations; we help them cut human service costs and attract more users by providing 24*7 interaction with their users, thus generating more revenue for them.
Inspiration
I was really inspired to make this advanced AI bot by the current COVID-19 situation: because of it, people are restricted from gathering, so the workforce and user interaction of various health organisations are adversely affected. Through this project I aim to connect health organizations with patients anywhere in the world, using any platform (not limited to Android, iOS, or the web), and also to manage patient data automatically, thus reducing human effort and maintaining social distancing.
MADE THIS PROJECT TO BRING A CHANGE
.
How is our product different than others
1)
There are many types of AI bots; most are decision-tree-based models that work with particular buttons only. Our product is based entirely on NLP models, which are more advanced and in higher demand than the others.
2)
Other AI bot service providers are confined to only 1 or 2 platforms, whereas we let the client choose from a large range of platforms at the same time: FB Messenger, Google Assistant, Slack, Line, website bots, and even phone calls
3)
For the health organisations that are willing to buy our technology (we are also willing to donate this tech for free), from a business perspective we will also be cheaper than our competitors: where others charge about $3,300/year for the service, we do it in a $100-$1,500 one-time-fee range with more versatility.
It will totally be free for any user using it, no charges will be applicable for users
What it does
Our bot empowers every health organisation in COVID-19 situations like this by managing screening, testing, and quarantine data, and by connecting people who are willing to get tested through a range of digital platforms. In cases where the internet is not working (where other bots won't function), our bot still works over a plain phone number, providing useful results in such situations. It covers all the important aspects of an advanced AI bot. It also connects health organisations with volunteers willing to donate their time as helping hands in this hour of need.
How I built it
I built it using Google Cloud AI solutions and the Google Cloud Dialogflow framework (which includes automatic Firebase integration), where I trained the bot with NLP on large datasets from the WHO and the government, and then integrated it with Facebook Messenger through a Facebook developer account. It also supports phone calls.
Challenges I ran into
I had to go through many challenges. Being a solo developer, I really had to face a lot of problems; making such a complex app with all the advanced features mentioned cost me a lot of sleepless nights, but I hope my hard work pays off.
Accomplishments that I'm proud of
I am really proud of the app that I made, because it is itself a big milestone for a solo developer like me.
What I learned
I learned a lot of things throughout the journey of developing this app: advanced use of Google Cloud AI solutions and Dialogflow, integrating it with Facebook Messenger, making filters inside the chatbot to enhance the user experience, and connecting it with a phone number to receive phone calls.
What's next for Health Bot
If my work gets selected, I am surely going to work really hard to make Health Bot even bigger and add more amazing functionality to make my users happy.
Built With
dialogflow
facebook
google-cloud
javascript
json
Try it out
github.com |
9,988 | https://devpost.com/software/maia-cfrkbs | Inspiration
We are two sisters who were first inspired to create this product by our mom, a high school teacher for the last 30 years. Despite her best efforts, she has been struggling to get through to students during the COVID-19 pandemic. Remote learning creates a dull, impersonal environment, and even as our mom is screaming, singing, and dancing in front of her camera, her students are unengaged and unable to learn.
Watching our mom struggle allowed us to realize that there is a major problem with existing remote learning products. Our mom is one of the top teachers at her school; she was awarded Teacher of the Year by her school in 2019, she is constantly going to conferences to learn new teaching techniques, and she comes to class motivated and energetic every day. If our mom is unable to inspire students to learn during this time, there are millions of teachers across America facing the same problem. This fact motivated us to build a better solution, so that the 1.5 billion students who have been impacted by COVID-19 may continue to learn through these unprecedented times.
What it does
Maia is a web application that has reinvented remote learning. Instead of hosting online classes through the traditional layout where participants appear in separate boxes, Maia brings the classroom to life by creating live animated versions of teachers and students. Teachers can write on a virtual whiteboard; add sound effects, stickers, and videos to their lesson; and easily start polls, assignments, and games. Students can click on their character to raise their hand at any time and they can write on the interactive board when they are called on to do so.
Traditional video communication methods create a dull, impersonal environment by removing the ability to use hand gestures to engage. Our design completely reinvents this traditional solution by using virtualization to allow people that are thousands of miles apart feel like they are in the same room. This ultimately enhances focus, empathy, and motivation in any human interaction, which is a necessary development in our increasingly cyber-physical world.
How we built it
The first step we took to build our alpha prototype was to research and write the code for the facial tracking and motion capture tool. This was implemented through computer vision based sensorless motion capture, meaning that users can use an ordinary camera and no sensors are required. After the face is detected, prominent parts of the face are identified using the Histogram of Oriented Gradients (HOG) method to extract features from pixels. After the features are identified on the human face, the corresponding features on the user’s avatar are linked and will move synchronously.
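The orientation-binning step at the heart of HOG can be sketched without any libraries. This is a simplified illustration (a single cell, no block normalization), not the project's actual implementation:

```python
import math

def orientation_histogram(image, bins=9):
    """Histogram of gradient orientations (unsigned, 0-180 degrees) - HOG's core step.

    `image` is a 2D list of grayscale intensities. Gradients are taken with
    central differences; each interior pixel votes for its orientation bin,
    weighted by gradient magnitude.
    """
    h, w = len(image), len(image[0])
    bin_width = 180.0 / bins
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // bin_width) % bins] += magnitude
    return hist
```

A full HOG descriptor computes such histograms over a grid of cells and normalizes them in overlapping blocks before feeding them to a classifier.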
Once this tool was working, we choreographed the facial movements of our animated character by pairing the motion capture tool with Adobe Animate. Next, the character’s legs, arms, and torso were animated by meticulously moving each body part throughout each second of the clip. Although choreographing the motion of the character's body parts was a very manually intensive process during the alpha prototype creation, this process will be automated in the final prototype of the software by using natural language processing (NLP) and sentiment analysis to determine the content and tone of the character’s speech.
After animating the character, the user interface of the web application was designed. The detailed designs of the main toolbar, the drawing toolbar, and the effects toolbar are described in the
Technical Product Implementation
document. The last step required to make the user interface was to design the interactive whiteboard tool. We conducted research regarding how to maintain consistency on a drawing tool when multiple people are simultaneously editing, as shown in the
Technical Product Implementation
document.
Finally, we began considering how we would market our product, who we would market it to, how much we would sell it for, and how we could expand to new markets in the future. This analysis is described in detail in the
Business Model and Market Potential
document attached to this submission.
Challenges we ran into
The biggest challenge we faced was our tight constraints on time and labor. Since we had a team of only two people and a time limit of only 7 days, we quickly realized that we had to narrow our scope and focus on the most important aspects of our design.
Additionally, we ran into several challenges while trying to create the facial tracking and motion capture tool. Although we had some experience writing facial recognition code in the past, capturing motion and translating that motion to an animated figure proved to be difficult. We overcame this challenge by thoroughly researching the technical aspects of the development of various computer vision algorithms. Beyond online resources, we contacted experts in this field to learn how they created such projects. Whereas last week we had nearly no knowledge of motion tracking beyond facial recognition, today we are close to the implementation of facial tracking and motion capture in Maia.
Accomplishments that we're proud of
Developing a complex, transformative, marketable product in seven days with only two people.
Manipulating a working facial tracking and motion capture algorithm.
Learning a complex animation software from scratch and effectively implementing it.
Validating the market need for our product by receiving 97 survey responses from teachers ranging from kindergarten to college level. This helped us validate that online education has been a major struggle for teachers across states at many grade levels. Additionally, many teachers left suggestions for features they wish their e-learning platform included, which helped guide our product design.
What we learned
Through the development of Maia, we learned how to implement and combine new technologies, including Adobe Animate, facial tracking and motion capture, and a collaborative object-based graphical editing system. Beyond that, we learned that the quality of education suffers when it is online. Through our own experiences, a survey, and open communication with current teachers and students, we learned that it is necessary to reimagine traditional online learning platforms, and Maia has done just that.
What's next for Maia
We are confident that Maia has the potential to hugely impact the e-learning market. To grow our business, we will first build out the live broadcasting and collaborative capabilities of our software. We will build a team of about 4 to 6 members to work on optimizing the animations and the UI design. We have already got in contact with a technical director at Walt Disney Animation Studios about our idea and we are in the process of setting up a call to gauge his further interest in the project.
Once our product design is finalized, we will file for a utility patent on the process of utilizing live animations in a virtual classroom and we will file for a design patent on the aesthetic of the UI and animations. If we are awarded winnings in this competition, we will use the money to invest in better animation tools and cloud storage.
We will then market our product to public, private, and online elementary schools. From here, we hope to expand Maia’s capabilities to help middle schools, high schools, universities, and eventually corporations. We really enjoyed working on Maia this past week, but it is only the beginning. We are eager to venture into the market and watch as Maia transforms the future.
Built With
adobe-animate
python |
9,988 | https://devpost.com/software/sesame-mobile-app-upz9bv | Inspiration
After talking to managers at stores in Lenox Mall, one of the largest shopping malls in the Southeast, we learned that retail stores face two primary problems: either there are too many customers for the limited store capacity created by social distancing guidelines, or stores lack customers altogether. Since the start of the pandemic, retail has faced the largest decline on record, with CNN estimating an 8.7% decrease in sales during March alone. Since consumer spending and retail comprise 70% of U.S. economic growth according to the U.S. Bureau of Economic Analysis, we wanted to find a safe way to encourage customers to return to retail stores while preventing lines.
What it does
Sesame is a mobile app that benefits both consumers and corporations by allowing customers to reserve a timeslot for entry at retail stores for rewards. Consumers gain guaranteed entrance to stores upon arrival, B2C rewards, and confidence that their health is prioritized. Customers can enjoy features such as sanitation ratings, scheduling calendars, and seed rewards. Corporations can open without door attendants, automatically track crowd density, eliminate long lines, and galvanize new customer interest. Features like automatic people counting and QR code entry will help to maintain a safe store capacity without a door attendant.
How we built it
We designed the app in Figma which is a digital prototyping tool. This allowed us to make the UI for Sesame so that we can demonstrate how consumers and businesses will use our app.
Challenges we ran into
One of the challenges that we ran into was in considering the ability of all stores to use the app to determine the number of people inside. We were mainly working on the automatic capacity counter feature for larger department stores or stores in the mall; these larger stores would have security cameras and struggle to place attendants at multiple entrances. However, we realized that some retail stores do not have security cameras. In response, we decided to create a manual count feature for stores without preexisting security cameras, allowing attendants to keep track of crowd density and limit reservations if necessary.
Accomplishments that we're proud of
We’re really proud of our app’s ability to eliminate attendants at the door. The automatic capacity counting feature, using live security camera feeds, can be combined with QR code entry so that a reservation is required to enter. Currently, stores have attendants at the door who manually count the number of people entering and serve as bouncers. Large stores have even closed certain exits to limit the number of attendants required; limiting the number of entrances and exits could create dangerous traffic flow by placing people in close proximity to each other. We think that our app is a great way for large department stores to avoid using attendants and utilize more entrances and exits.
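Since Sesame is currently a Figma prototype, here is a minimal Python sketch of how the reservation-plus-capacity gate could behave once implemented; the class and method names are illustrative assumptions, not part of the actual design.

```python
class StoreCounter:
    """Illustrative occupancy gate: a QR-coded reservation is admitted
    only while the store is under its social-distancing capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.inside = 0
        self.reservations = set()

    def reserve(self, qr_code):
        # A customer books a timeslot and receives a QR code.
        self.reservations.add(qr_code)

    def try_enter(self, qr_code):
        # Entry requires both a valid reservation and spare capacity.
        if qr_code in self.reservations and self.inside < self.capacity:
            self.reservations.discard(qr_code)
            self.inside += 1
            return True
        return False

    def exit(self):
        # A camera (or attendant, in the manual-count mode) reports an exit.
        self.inside = max(0, self.inside - 1)
```

In this sketch the same counter serves both the automatic mode (camera feed driving `exit()`) and the manual mode (an attendant tapping a button).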
What we learned
Through our research, we learned that there are two different problems for stores: Social distancing measures are generating long lines at certain stores, while customers are sparse at other stores. Overcrowded stores pose a significant risk for transmission of COVID-19, but undercrowded stores risk bankruptcy. Sesame simultaneously solves both ends of the spectrum. We also learned that managers at companies like Guess, L'occitane, and UGG are interested in offering generous benefits to customers through reward platforms in order to tackle these problems.
What's next for Sesame Mobile App
Next, we hope to release Sesame on the App Store so that businesses and customers can begin to benefit from it. We will launch a beta version for use at Lenox Mall in Atlanta and, if successful, will expand to other retailers and to major US cities like New York, Los Angeles, and Houston. Besides word of mouth, we plan to expand our consumer base with more advertising and referral codes that reward users for inviting their friends to the app. We will also release a Google Play version of Sesame to reach a larger consumer base around the US. After successfully implementing the retail version of our app, we will create spinoffs of the Sesame app for schools, leisure, fitness, parks, and personal care services.
Built With
figma
Try it out
www.figma.com
docs.google.com |
9,988 | https://devpost.com/software/covidhackathon2020 | COVIDHackathon2020
We were inspired by thinking about issues COVID-19 caused that affect our daily lives, specifically as life begins to open up once again. Although we did not have much previous experience in developing applications, we learned the basics of React Native to create a simple version of the application. Additionally, we created a Figma prototype to get an idea of what the app would look like visually here: [
https://www.figma.com/proto/c6Odp1lzEjn5dNCgFOuehP/Interspace?node-id=78%3A1&scaling=scale-down
] Other resources we used to further understand various Social Media APIs are here: Instagram: [
https://medium.com/@bruceoh/data-mining-instagram-scraper-location-tag-w-yelp-data-set-2-94728064a4c0
] Facebook: [
https://developers.facebook.com/docs/places/web/search
] and Snapchat: [
https://github.com/CaliAlec/snap-map-private-api
] We faced challenges trying to integrate the data into the actual application; however, we believe that with more experience and time with React Native it would not be too complicated a process. We also had to consider the legality of collecting this information; we ultimately concluded that since it is based on publicly available data, it was not ethically wrong to use.
Built With
facebook
instagram
javascript
react-native
snapchat
Try it out
github.com |
9,988 | https://devpost.com/software/remote-learning-ar-quiz-platform | Biological Components Quiz
Firebase Real-time DB Console
Echo AR ConsoleUnit
Inspiration
Remote learning in the time of COVID-19 has been challenging for students and teachers, and it requires new tools and approaches. Student engagement and retention have shown a measured improvement of 50% when using Augmented and Virtual Reality content.
What it does
It generates interactive quiz modules leveraging mobile AR. Students follow the video prompts and instructions, then drag and drop the 3D asset answers on top of the video instructor. Correct and incorrect responses, as well as session engagement statistics, are recorded in a Firebase database.
How I built it
Using the Unity platform, we leveraged the AR Foundation SDK to create the augmented reality platform. The assets were downloaded and imported from CG Trader to simulate a sample molecular biology lesson plan. Video assets and 3D assets are maintained and delivered using echoAR. The quiz statistics are stored in a real-time database using the Firebase SDK.
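As an illustration of the kind of per-session record such a system might write to its real-time database, here is a small Python sketch that builds a session-statistics entry; the field names and accuracy formula are our assumptions, not the app's actual Firebase schema.

```python
def make_quiz_record(student_id, answers, started_at, finished_at):
    """Build a session-statistics record of the shape one might store in
    a real-time database (field names are illustrative).

    `answers` is a list of dicts like {"correct": True}; timestamps are
    in seconds."""
    correct = sum(1 for a in answers if a["correct"])
    return {
        "student": student_id,
        "correct": correct,
        "incorrect": len(answers) - correct,
        "accuracy": correct / len(answers) if answers else 0.0,
        "engagement_seconds": finished_at - started_at,
    }
```

A record like this captures both the correct/incorrect counts and the session engagement statistics mentioned above in one write.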
Accomplishments that I'm proud of
Creating the drag and drop Quiz interactions and integrating the Firebase and Echo AR platforms.
What's next for Remote Learning AR Quiz Modules
Creating CMS tools for the teachers to build, organize, share and distribute individual quiz content.
Built With
android
arfoundation
c#
echoar
firebase
samsung
unity
Try it out
drive.google.com |
9,988 | https://devpost.com/software/team-discover-xhwl0k | The problem our project solves
There are thousands of (potentially) infected people being monitored in hospitals in non-intensive rooms. These are cases that are not severe enough to be in ICU care, but if their conditions worsens, they need to be relocated there. Nurses work around the clock to help and monitor them many times a day, but current practices have huge shortcomings.
There is a shortage of protective gear, and it is heavily overused, which puts nurses at high risk after so much close physical contact with patients.
Just as with the equipment, there is also a lack of human resources: it is critical that nurses stay healthy so that staff numbers don't drop.
Monitoring the vital signs of a patient takes a nurse about 5 minutes, not counting the changing of gear, which means only a small number of people can be checked in an hour.
The measured data is rarely entered and stored online, which limits any further analysis.
What we bring to the table
We give nurses superpowers by doing 100 check-ups in the time it used to take to do 1, all while staying far from the patient and out of risk.
Our solution enables a highly scalable patient monitoring system that minimizes physical contact between nurses and patients, which also reduces the shortage of protective gear. Instead of occasional visits, our device measures vital parameters in real time and uploads each patient’s data to a central server. With the help of our dashboard, doctors and nurses can oversee a hundred times more patients, while our automatic alert functionalities make it possible to diagnose deteriorating cases instantly and to reach quicker reaction times.
In the span of 48 hours, we have created a fully-functional pair of 3D printed glasses, allowing patients to initiate frequent measuring of their vital signs, all by themselves. These include body temperature, oxygen saturation and respiratory rate, the key values nurses regularly check on coronavirus patients.
We bought the sensors ourselves and designed the 3D printed glasses frame to blend them into one device. This fast prototyping only cost us 21€, and mass production would lower the per-device cost even more.
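The automatic alert idea for the three measured vitals can be sketched in a few lines of Python; the thresholds below are placeholder assumptions, since in practice the alert criteria would be set by clinicians, not hard-coded.

```python
def check_vitals(temp_c, spo2_pct, resp_rate):
    """Flag readings outside illustrative thresholds.

    Returns a list of alert strings; an empty list means no alert is
    raised for this measurement cycle."""
    alerts = []
    if temp_c >= 38.0:
        alerts.append("fever")
    if spo2_pct < 94:
        alerts.append("low oxygen saturation")
    if resp_rate > 24 or resp_rate < 10:
        alerts.append("abnormal respiratory rate")
    return alerts
```

The dashboard would run a check like this on every upload and surface only the patients whose list is non-empty, which is what lets one nurse oversee many rooms.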
What we have done during the weekend
We improved the 3D printed prototype that we created on a previous weekend. We had to re-assemble the sensors, and we performed benchmark tests to measure the accuracy of our sensors. We consulted with multiple medical professionals, on top of the ones we had already talked to earlier, and were able to come up with a better infrastructure for our solution. We also focused more this time on the supporting services and the infrastructure that would be required for such a device.
Our solution’s impact to the crisis
Our medical device enriched with our data analysis system is designed without the need for any specific infrastructural requirement, which allows universal usage in any country. Furthermore, hospitals, regions or even countries can collaborate and share their data to find global patterns, which opens doors for new innovations to fight the virus together. Our modular sensor design and 3D printed case allows fast mass-production and short implementation time. From the medical view, we are keeping the medical staff in a safe distance to protect them from highly infective patients. With our real-time, large-scale monitoring, nurses and doctors can filter out and deal with most pressing cases while our system keeps an eye on every other patient.
We have talked with over 15 professionals, including multiple doctors, nurses, investors and manufacturers, and they were eager to hear how fast we could get this to hospitals. After further recognition and an award from EIT Health, multiple doctors reached out to us, offering their expertise and support, which gave us another huge confidence boost in the project.
The necessities in order to continue the project
For us to scale up this project, we need partners that can help us in mass manufacturing, as we lack the experience in this area. For the manufacturing, we would need a large quantity of sensors, injection-molding and assembling facilities. For fast delivery of the device, we also need the cooperation of hospitals, doctors and nurses to help us in testing. Their feedback is invaluable for the success and impact of our product.
The value of our solution after the crisis
Although the parameters measured by our medical device are the most informative values for COVID-19 infected people, body temperature, oxygen saturation and respiratory rate are key indicators for illnesses under normal circumstances as well. Therefore, our wearable makes everyday routine check-ups faster even in normal situations.
Another key change would be digitalization. Many hospitals still don’t have a centralized medical system and database, while our solution could start a new wave of data analysis and speed-up innovative activities in the health industry.
The available data and its analysis can also boost cross-European collaboration by sharing trends and new findings between countries, leading to more efficient and smarter future detection measures.
Team
We have multiple years of experience in hackathons and real life projects. Our team combines a multi-disciplinary knowledge of full-stack development, machine learning, design and business development. We are double-degree EIT Digital students at top universities, including KTH Royal Institute of Technology, Aalto University, Technical University of Eindhoven and Technical University of Berlin.
Márton Elődi - EIT Digital MSc Student in Human-Computer Interaction Design
- Several years of experience in software and product development
Kristóf Nagy - Electrical engineer
and professional motion graphics designer
Péter Lakatos - EIT Digital MSc Student in Data Science
- Experience in ML and business development
Miklós Knébel - EIT Digital MSc Student in Autonomous Systems
- Experience in robotics, deep learning and automation
Péter Dános - EIT Digital MSc Student in Visual Computing
- Expertise in 3D printing and design
Levente Mitnyik - EIT Digital MSc Student in Embedded Systems
- Vast knowledge of electrical engineering, micro-controllers and embedded systems.
Built With
3dprinting
arduino
autodesk-fusion-360
c++
infrared
pulsoximeter
Try it out
github.com |
9,988 | https://devpost.com/software/hopeful-home | Logo
MVP Roadmap - Agile Planning
Layers of Agile Planning
Hopeful Home is a mobile app aiming to aid, support and empower victims of domestic abuse. To keep users safe, Hopeful Home appears as a calculator called TrigCalc, just another mundane app. When the app opens it functions entirely as a normal calculator, so nothing arouses suspicion if it is opened. However, when you enter the correct password, you can enter our app.
Functioning app:
https://youtu.be/u42eoMLNjd8
Value Propositions:
Hopeful Home app helps people experiencing domestic abuse by providing a safe Haven for users to log how they feel and gain knowledge on what will help them.
Hopeful Home helps keep people experiencing domestic abuse safe by making the app password protected and hidden by a calculator interface.
Hopeful Home app helps people experiencing domestic abuse by providing a diary feature where they can log their days. Therefore they can read over this later on for self reflection.
Hopeful Home app helps break the cycle of abuse by providing a large education element, helping the user know their rights and breaking myths.
Hopeful Home app helps tackle the 28% of cases where domestic abuse victims fail to get help because of a lack of evidence by providing a discreet recording feature hidden behind the calculator.
Hopeful Home app helps answer the user’s questions and concerns by providing an AI chatbot with a 99% accuracy
Hopeful Home app helps users easily reach out to people in an emergency by providing an SOS button which can be programmed to call who the user desires. 3 options in case the first 2 do not answer for extra safety.
Hopeful Home app helps users to discover patterns in their feelings by integrating a tone analyser.
Hopeful Home app helps people from different backgrounds by offering translations into different languages
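The hidden-unlock idea behind the calculator disguise can be sketched in Python; the secret code, class name, and reset behaviour here are illustrative assumptions, not the app's actual Swift implementation.

```python
class DisguisedCalculator:
    """Sketch of the calculator disguise: the keypad behaves like a
    normal calculator, but typing the secret PIN opens the real app."""

    def __init__(self, secret="1984"):
        self.secret = secret  # in the real app this would rotate weekly
        self.buffer = ""

    def press(self, key):
        """Handle one keypress; return True when the app should unlock."""
        if key.isdigit():
            self.buffer += key
        else:
            # Any operator key resets the hidden buffer, so normal
            # calculations never trigger an accidental unlock.
            self.buffer = ""
        return self.buffer.endswith(self.secret)
```

Resetting the buffer on every operator press means the PIN must be typed as an unbroken digit sequence, which keeps ordinary calculator use indistinguishable from the disguise.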
Inspiration:
During these tough and uncertain times, we decided that for us as a team the most important aspect is the mental and physical wellbeing of those who are lucky enough not to be suffering from COVID-19. In particular, we have chosen to look at a shocking statistic and work towards lowering it through our app. Domestic abuse figures have gone up 25% since lockdowns started across the world, and more people than ever before are facing aggressors in the household. With the outside world no longer safe, we aim to make the home a safe environment for as many as we can by educating, aiding and empowering those who are victims of abuse. Hopeful Home is an app reaching out to all genders, races and ages in the UK. It is critical that at this moment in time people are supported and not isolated, and this is where Hopeful Home comes in. Our inspiration also came from the women we talked to and what they felt was crucial for the app. We spoke to many women and all their feedback was integrated into the idea, such as adding a GPS tracker for the closest help service and having a place to upload screenshots within the app. In addition to this, we were inspired immensely by the great speakers who gave up their time to come and speak to us during the live streams, in particular the speakers on the first day (Mandy Sanghera and Ganesh), as they stood for problems similar to those we stand for. On top of that, the mentors we worked with were a huge inspiration to us: they believed in us and kept us going, even when the coding was going all wrong and the designs were terrible. They sparked a light in the right direction, and we hope to help others as they have helped us.
What it does:
To an unknowing eye, Hopeful Home is a simple, mundane calculator app named TrigCalc. However, typing the PIN code on the calculator keys takes you into the real app. Safety is the most important aspect to us; therefore, the PIN of the app will change weekly. Incorporated into our app is the AI chatbot, named Haven: a place of safety. The chatbot answers any concerns or questions the user has. Hopeful Home also has a diary feature where users can write accounts of their day, which they can use to keep track of their days and for self reflection. Uniquely, there is also an audio-recording element: unlike traditional microphone features, the screen is hidden behind the calculator whilst recording, which is an extra security element. To empower the victim, we have also integrated education, including legal rights, preventative advice, places for support, and positivity. From the women we spoke to, we found they were not sure what to do or what their rights and options were, which is why the education element is so important.
How we built it:
We first began by using Agile Planning to ensure we would work smoothly. We made an MVP roadmap and outlined the 5 layers of agile planning, accompanied by our Trello board. We conducted extensive research for the entire 9 days and gained extra knowledge from our calls with Barnardo's Phoenix Project. We began with a design specification which featured elements that the victims we spoke to said would be impactful. We then made a chatbot from scratch, building our own model in Python. Next we made interactive wireframes (which are linked below) to portray everything. Finally, our developers began making a Minimum Viable Product in Swift.
Challenges we ran into:
Thanks to great teamwork and communication, there were very few problems that we ran into that we couldn't solve with hard work. Firstly, a problem we faced was integrating the Python chatbot into our Swift app promptly. So we came together as a team and decided we would transfer the chatbot we had made from scratch into DialogFlow (a natural language understanding platform used to design and integrate conversational user interfaces), which was easier to integrate into our app. Another problem we faced, specifically the developers, was trying to link all of the different code together to make a fully working prototype of the app. Design was not easy either, as we were trying to cram in too many things. We then decided that it would be best if we went for a single colour scheme of blue, in pacifying rather than bright shades. After a lot of trial and error, we also decided to make the illustrations on both the website and the app hand drawn, to give them a more homely feel, especially since the images we had seen previously weren't giving the right ambience. We also found it difficult to add a scroll view in Swift.
Accomplishments that we're proud of
As a whole, we are all proud of ourselves and each other for building a potential app which could work and support a part of society that is vulnerable at the moment. Some of our accomplishments include building the wireframes, building a website from scratch, personalising and designing our website to suit our needs, and actually coding the app in Swift. We then linked the code together so that the app actually functions. We also did massive amounts of research and got in touch with a lot of people and charities, such as Barnardo's and Women's Aid, who are proud of the app and see lots of potential in it. We accomplished most of the app and then the website as well. We also accomplished a lot in developing the idea of the app and implementing improvements from feedback. Our pitch was also a very hard part of our project, as it had to incorporate all of the elements into 3 minutes; we worked hard to get the pitch and the video to the best of our ability.
What we learned
We have learnt a lot from this experience, including teamwork, collaboration, communication, hard work and much more. Our problem-solving skills were enhanced and our friendships with others bonded. We also learnt how to code in ways we had never learnt before, and everything from the AI to the design development was a new learning curve for the team. We learnt the most intricate things about the different types of design, and more advanced things like how to make a pitch and sell your idea, and how companies are funded. Furthermore, we learnt to trust and work together as a team, collaborating and communicating so that all of the small goals in the project were fulfilled and we could reach the larger goal of building the app. We also learnt the importance of developing ideas through feedback, and that the first idea is very rarely the best idea. When we heard feedback from different people, we got perspectives that we had never even thought about before. For example, when speaking to Mandy Sanghera, an international human rights activist, it was suggested that we offer the app in different languages for people who cannot read English, something we had glossed over as a team. Also, after speaking to Barnardo's, we understood why teaching self-defence ultimately risks the victim inflicting harm and inviting more violence. By trialling and testing the app with different people, we got such a wide variety of feedback, which we are so grateful for and which helped us learn the importance of frame of mind and differing views.
What's next for Hopeful Home
After contacting many charities and people with experience in domestic abuse, we were told how much potential our app has. Barnardo's and Women's Aid both proposed to work with us, and have made connections with the Forced Marriage Unit, Coventry University and the Foreign Office. We will decide whether to make our app into a charity or a social enterprise through which it can fund itself. Having great mentors (business and design, with Mandy Sanghera coming on board as our adviser), we believe that we can work towards helping millions of people, soon internationally as well, who face abuse and violations of their human rights. We also want to make Hopeful Home available on all devices.
Built With
html
python
swift
Try it out
console.dialogflow.com
hopefulhome.glitch.me
trello.com
marvelapp.com
jamboard.google.com
github.com
docs.google.com
docs.google.com |
9,988 | https://devpost.com/software/ma3ak | Inspiration
Egypt and other countries need to flatten the curve, but we cannot stop our lives for a disease. The consequences would be worse than catching the virus; the worst-case scenario may include the country's economic collapse.
Taking into consideration our country's special circumstances (a population of 100M people and literacy levels that are low relative to the size of the crisis we are facing), we can never rely on citizen awareness alone to stop the spread of this pandemic.
What it does
It uses the positioning services of HERE and MOH data to track suspected patients and medical staff, in order to alert the MOH operations room to expect a spike of COVID-19 in those areas and to disinfect them preventively. It is also a useful system for tracking the historical data of confirmed patients, making contact-trace follow-up more accurate: some people will not be able to remember the places they have visited and the people they have met, but the mobile network's base stations and HERE's positioning servers can retain up to 30 days of historical data.
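As a rough illustration of the spike-alert idea, the sketch below counts suspected-case location pings per area and flags areas that cross a threshold; the data shape, area names, and threshold are assumptions for illustration, not part of the actual system.

```python
from collections import Counter

def flag_hotspots(pings, threshold=3):
    """Count suspected-case pings per area and return the sorted list of
    areas whose count reaches the threshold.

    `pings` is an iterable of (user_id, area) pairs, e.g. as resolved
    from positioning data."""
    counts = Counter(area for _, area in pings)
    return sorted(area for area, n in counts.items() if n >= threshold)
```

An operations room could run this over a rolling window of recent pings and dispatch disinfection teams to any flagged area.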
How I built it
I rely on the positioning service of HERE Maps and process the tracking data using MathWorks software, as they have a success story with Databricks and Kafka.
Challenges I ran into
The Egyptian Pandemic TechHack, the Global Hack, and Hack the Crisis NL.
Accomplishments that I'm proud of
Finalist in the Egyptian Pandemic TechHack.
What I learned
How to give a pitch and how to build a team.
What's next for Ma3ak
we're working with other governments in Europe to have the system adopted.
Built With
apis
blockchain
rust
trio |
9,988 | https://devpost.com/software/facetag-for-seemless-journeys | Inspiration
In today's increasingly populated world, and with the current COVID-19 situation that has shocked the world, it has become increasingly difficult to travel using public transport due to the bottlenecks that form when people move in and out of the system. We are focusing on building a solution for bottlenecked gateways in the daily commute; one such location would be the entry and exit points at metro stations and airports. The idea is to use Azure's Cognitive API (facial recognition) to build an automated payment system for public transport (using an online wallet). Imagine FastTag-based tollways for human faces without the need for any hardware or physical cards. You can simply walk in; we'll scan your face and deduct the cost from your wallet. This enables people to just walk in, without any additional hardware (and/or RFID cards), and move through the public transport system without any wait times or unnecessary issues.
What it does
User story
A real-time video solution that can identify individuals and make payments based on the distance they have travelled.
Example for Metro stations and train stations -
# The first example has two steps
Scenario: User enters into X metro station
Then the facial recognition platform looks for the user in the database.
Then an API is called to initiate the blockchain contract transaction
# The second example has three steps
Scenario: User Exits from Y metro station
Then the facial recognition platform looks for the user in the database
Then it calculates the fare for the distance travelled
Then an API is called to deduct the fare from the wallet
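The entry/exit fare flow described in the scenarios above can be sketched in Python; the per-stop rate, station numbering, and class names are made up for illustration and are not the project's actual blockchain contract logic.

```python
class Wallet:
    """Illustrative fare flow: a face match at entry opens a trip, and a
    face match at exit computes the fare and deducts it."""

    def __init__(self, balance):
        self.balance = balance
        self.entry_station = None

    def enter(self, station):
        # Triggered when the entry-gate camera recognizes the user.
        self.entry_station = station

    def exit(self, station, fare_per_stop=5):
        # Fare is proportional to stations travelled (a placeholder rule).
        fare = abs(station - self.entry_station) * fare_per_stop
        self.balance -= fare
        self.entry_station = None
        return fare
```

In the real system the deduction step would be the API call that settles the blockchain contract; here a plain balance stands in for the wallet.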
How we built it
Challenges we ran into
Building this hack was by far the most challenging thing any member of this team had done.
Two people worked all night on reducing the lag in face recognition and making it as fast as possible, testing multiple models, solutions, and methodologies, while the other two members tirelessly built the code that powers the solution on the blockchain. On top of that, the testnet for Matic crashed in the early morning, and we were unable to move forward with our testing. Unfortunately, we were not able to deploy our code to Matic.
Accomplishments that we're proud of
We were able to make the face recognition run at 13 fps, which makes the project completely real-time.
What we learned
With the help of the internet and amazing blogs by Steve Harley, we learned Azure Cognitive Services and many new things about mobile app development in Xamarin. We also learned a few aspects of blockchain.
What's next for FaceTag - For seamless journeys
Future Additions
Density check in public transportation systems
# A future addition would be a density check of people using Azure object detection
in all metro stations, to help with metro wait times at a particular station
at a particular time, based on the density and Azure analytics
Built With
azure
flask
machine-learning
python
Try it out
github.com |
9,988 | https://devpost.com/software/clearfit-mask | ClearFit Mask
Inspiration
Initial
inspiration
came from the fact that there was a shortage of masks available to the general public and to essential workers. We saw the problem standard masks had by not allowing for
facial expressions
to be shown and not providing a
good fit
for every user.
What it does
We created the ClearFit Mask out of clear PETG plastic so that lip movement and expressions can be seen, and so that there is a customized fit for each user's face.
How we built it
We designed the mask in CAD and 3D printed the prototype.
Challenges we ran into
During the process of creating the prototype, it was a challenge not being able to see the prototype come to life while campus labs were closed and there was a scarcity of 3D printers. Towards the end of the event, there were technical issues due to internet loss and modems burning out, but we were able to push through.
Accomplishments that we're proud of
We learned to communicate more effectively and make do with what we had in our homes to convey designs ideas.
What we learned
We learned that communication is very important when it comes to getting an end result with a due date.
What's next for ClearFit Mask
The future for ClearFit Mask is to bring the masks to mass production so that people may actually benefit from it and so that we may have a healthier and safer society. |
9,988 | https://devpost.com/software/breathe-smartly | Inspiration
There are many harmful substances, toxic gases, and dust in the air.
What it does
It purifies the air of any harmful substances.
How I built it
It is built using metal components and a MOF (metal-organic framework) material.
Challenges I ran into
Health
I chose health because many people have developed shortness of breath due to harmful substances in the air, which the coronavirus has made worse.
Accomplishments that I'm proud of
Attending the lectures.
What I learned
A lot, such as artificial intelligence.
What's next for Breathe smartly
Breathing without harm.
Built With
html
javascript
python |
9,988 | https://devpost.com/software/healthvoice | https://sagarbk0.github.io/HealthVoice/
Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for HealthVoice
Built With
echoar
firebase
Try it out
github.com |
9,988 | https://devpost.com/software/vr-enabled-mobile-icu | Inspiration and Background:
Inspiration for this project, a continuation of the winning hackathon submission titled "On-Demand Intensive Care Patient Rooms," came from outreach by the Stevens faculty and team. A special thanks to Premal, Executive in Residence at Stevens Institute, for encouraging our participation in Hack 2. As a result, the whole team again came together to go above and beyond. Since the first hackathon, 42 Mobile ICUs have been deployed and used by COVID-19 patients. With a total delivery timeline of six weeks from concept to delivery, the design, as presented in the first Stevens Hackathon, brought relief to the healthcare workers and patients affected by COVID during this unprecedented pandemic. Our challenge was to innovate upon our existing product within the area of study while coordinating fieldwork during the pandemic. Our goal was to implement virtual and augmented reality solutions that minimized work for field installation teams and provided a paperless, streamlined roadmap for assembly and modification of the Mobile ICU.
AR-VR Solution: Age-in-place and Post-Assembly response to "Social Distancing" challenges:
The design and deployment of Mobile ICUs' conceptualization to delivery was met with many physical and design constraints from mobile foundation design to medical ventilation and their components. In a Post-COVID-19 world, the immediate concern becomes the model for dis-assembly and re-assembly. In other words, how can we recover, re-use, and redeploy these Mobile ICU units given the constraints caused by COVID-19.
How WE built it:
As we did during Healthack 1, Healthack 2 started with a meeting of the minds. Conceptually, AR-VR has been in the market for some time, albeit in silo-ed applications. The challenge would be to import the Mobile ICU model, conceptualize a productivity outcome, and utilize Augmented and Virtual reality to identify components, plan for assembly and dis-assembly in the field, and have digital communications moderate and minimize the need for social contact. BMarko Structures and our team worked closely with Matt Stevenson of OUR SNRG. It was clear that there was an opportunity to integrate the facility configurations with the Virtual and Augmented tools available to the industry.
Challenges we ran into and "Small Wins":
Software compatibility leaves much to be desired across construction compatible design platforms. Specifically, the Augmented and Virtual reality interface was a challenging suite to modify and update. With limited functionality, but successful in overlaying different materials and components, a significant factor of the success of this iteration is the ability to tap and expand on equipment specifications, and a true-to-measurement accuracy of existing systems. This will enable field modifications, upgrades, and other add-ons with real-time overlay access to installation instruction without the physical presence of field supervision unless required by safety protocol.
What's next for ICU Patient Rooms JIT/On-Demand:
Our next design integration is to add elements of field-tested Falls prediction technology, cardio-vascular risk prediction research. The Mobile ICU Units continue to be supplied to hospital systems and other agencies with different life-cycle plans (dis-assembly vs. age-in-place). The concept will broaden to other Mobile Healthcare Lab Facilities as the need for COVID related facilities continues or is hopefully reduced. We are seeking to network with other healthcare systems in the Northeastern USA to supply these mobile and relocatable structures.
Built With
autodesk
python
revit
sketchup |
9,988 | https://devpost.com/software/crowdcount-feld1x | 1. Inspiration
The idea for this project was sparked by a problem our team members noticed in their everyday lives: when going outside for essential actions (e.g. grocery shopping), it was extremely difficult to actually decide what time to go because they had no idea how crowded a location was. Google Maps gave an estimate for restaurants based on past data, but other locations such as public parks and schools didn’t even have an estimate.
As a direct result of this, our team decided to develop CrowdCount to address these concerns. CrowdCount not only includes data on schools and parks, but it also updates a webpage in real-time rather than relying on past data to extrapolate predictions. The real-time aspect makes this more useful for consumers and business owners alike.
2. What it does
CrowdCount is a brand-new project created for the COVID Hackathon II hosted by the Stevens
Venture Center. This site utilizes state-of-the-art computer vision to track the number of people
entering and exiting an establishment and updates a database in real-time, allowing business owners to get a live feed of how busy their location is. Not only would this enable earlier re-opening for businesses, but it would also create an environment that is safe for owners, workers, and patrons alike. Additionally, CrowdCount offers an easy-to-use, search-based website that allows members of the public to see both real-time and daily average data for
crowd sizes at establishments of their choice.
3. How we built it
Our team developed CrowdCount over the course of 9 days, including frontend development with HTML/CSS,
backend development with Django, SQLite databases, and our own custom API, and open-source computer vision software using OpenCV. Even though our team members were spread out across the continental U.S., we used collaborative tools such as GitHub and Slack in order to effectively communicate and produce a polished, working product.
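As a rough illustration of the counting step described above, here is a minimal, hypothetical sketch (not CrowdCount's actual code) of the line-crossing logic an occupancy counter can use once a detector has produced per-frame centroids; all names and coordinates are illustrative:

```python
# Hypothetical sketch: a detector (e.g. OpenCV's person detector) yields
# centroid positions per frame; occupancy changes whenever a tracked
# centroid crosses a virtual line drawn across the entrance.

class LineCounter:
    def __init__(self, line_y):
        self.line_y = line_y      # y-coordinate of the virtual entrance line
        self.count = 0            # current estimated occupancy

    def update(self, prev_y, curr_y):
        """Update occupancy from one tracked centroid's movement between frames."""
        if prev_y < self.line_y <= curr_y:
            self.count += 1       # crossed downward -> entered
        elif curr_y < self.line_y <= prev_y:
            self.count -= 1       # crossed upward -> exited
        return self.count

counter = LineCounter(line_y=200)
counter.update(prev_y=180, curr_y=210)   # one person enters
counter.update(prev_y=150, curr_y=190)   # movement without crossing the line
counter.update(prev_y=220, curr_y=195)   # one person exits
print(counter.count)                     # net occupancy: 0
```

The resulting count is what would be written to the real-time database for the website to display.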
4. Challenges we ran into
It goes without saying that the situation the world is in has created a very unique project development experience for everyone. Since our team members were all in different time zones, finding ways to collaborate in real-time was a big challenge. In regards to technical challenges, the greatest difficulty was just the fact that we were working with new tools. This meant a lot of time was spent correcting our own mistakes because we were unfamiliar with syntax or style. In particular, creating a fully functioning search bar that accessed and searched through a database was a big technical hurdle we overcame.
5. Accomplishments we're proud of
We’re very proud of the end product that we’ve created and all the learning that took place throughout the development process, but the thing that really stands out to us is how well our team members collaborated despite the big separation between us all. Since our team members were all in different time zones, communication was a difficult task for all of us — we had to be very careful not to accidentally change someone else’s work or to forget to mention a necessary detail. This is easier said than done, but our team managed to avoid any of these problems, and we’re extremely proud of this fact.
6. What we've learned
Going into this project, we wanted to challenge ourselves to learn to work with new tools we haven't had much exposure to. Our team members have varying levels of technical proficiency, so each member learned different skills through creating CrowdCount. Through this project, one member learned the fundamentals of front-end web design (i.e. using HTML, CSS, and Javascript), another got to familiarize himself with computer vision concepts, and someone even got to learn simple things such as how to work with GitHub.
7. What's next for this project
Though we're extremely proud of the work we've done, we understand that CrowdCount, in its current state, is only the beginning of adjusting to life post-COVID-19. For the future, we hope to make our website more interactive and include robust visualizations of our data in order to provide even more to the public. Additionally, we hope to create a weekly reporting system that we can provide to businesses so that they can track changes in their crowd size over time and plan for days of the week that are busier on average.
Additionally, we hope to begin integrating CrowdCount into our daily lives. One important area of student life that we feel CrowdCount can specifically impact is university buildings — if universities can carefully monitor the number of people in a lecture hall or library, they will be more confident in their ability to protect students from COVID-19 and can accelerate the timeline of students returning to campus and receiving a high-quality education.
Built With
css
django
html5
opencast
sqlite
Try it out
github.com |
9,988 | https://devpost.com/software/nextstep-ai | Inspiration
Our inspiration stemmed from the study of how important fitness is to boost immunity during these times. When the coronavirus was declared a pandemic and people started self isolating to stay safe, an equally important issue was brought to light. With gyms and fitness centers closed, it became all the more crucial to promote workout from home. If working from home can be the new normal so can workout from home. We wanted to give trainers a chance to train and continue working during these times. A lot of trainers have lost jobs or are under income freeze and this platform will help them build a reputation with users personally. The users on the other hand get a chance to keep a regular workout routine by indulging an almost real-gym like virtual experience with added performance analytics and weekly reports.
What it does
nextstep.ai is a virtual fitness platform that lets trainers go live, record fun, interactive fitness sessions of different varieties, and build a reputation for themselves and their fitness centers. Users can register for 2 free live sessions per day. They will be asked to give access to their cameras; all their movements are recorded, compared to the trainer's movements, and a similarity score is generated between 0 and 1 (1 being exactly similar). Further analytics are computed to give users a feel for their performance, motivation to attend another session, competition between friends, and the chance to try different categories at different levels.
How we built it
We used Tensorflow lite and posenet to build it. PoseNet is a vision model that can be used to estimate the pose of a person in an image or video by estimating where key body joints are. Pose estimation refers to computer vision techniques that detect human figures in images and videos, so that one could determine, for example, where someone’s elbow shows up in an image. The key points detected are indexed by "Part ID", with a confidence score between 0.0 and 1.0, 1.0 being the highest also called as Pose Match Score. It uses TensorFlow.js along with PoseNet Libraries to provide the confidence score for not only single person pose but also for multiple person in a single frame be it trainers or users.
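As an illustration of how two sets of keypoints can be reduced to a single 0-to-1 score, here is a hedged sketch (not the team's actual code) that flattens the (x, y) joint coordinates into vectors and takes their cosine similarity; the coordinates and function names below are purely illustrative:

```python
import math

def flatten(keypoints):
    """keypoints: list of (x, y) pairs for body joints, in a fixed PoseNet order."""
    return [coord for point in keypoints for coord in point]

def pose_similarity(pose_a, pose_b):
    """Cosine similarity between two poses; 1.0 means identical direction."""
    a, b = flatten(pose_a), flatten(pose_b)
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0                       # no detected joints -> no match
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm_a * norm_b)

trainer = [(0.1, 0.2), (0.4, 0.4), (0.5, 0.9)]
user_same = [(0.1, 0.2), (0.4, 0.4), (0.5, 0.9)]
print(round(pose_similarity(trainer, user_same), 3))  # identical poses score ~1.0
```

In practice, keypoints would first be normalized for scale and translation, and low-confidence joints filtered out using PoseNet's per-part confidence scores.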
Challenges we ran into
Building a pose detection algorithm was a real challenge, because such models are usually the subject of research efforts spanning months. We used TensorFlow Lite and a prebuilt PoseNet model to test on the fitness videos. This was a new technology for us. Getting sensible similarity scores and building the analytical computations was another challenge.
Accomplishments that we're proud of
Our idea was our biggest strength, inspiring us to work on this app, along with our commitment to the new normal and to helping fitness centers recover from losses. Our pose detection algorithm is a pioneering feature in the virtual-experiences domain that helps quantify workouts, something not feasible at gyms. We are also proud of our dashboard of analytics, which is indeed our selling point.
What we learned
We learned the posenet architecture and the tensorflow lite platform and the usage to build our customized pose detection algorithms. We also studied the different fitness apps similar to ours and their business plans. We learned how to write our own unique business plan that makes our service stand out. We also learned how to write different computations to generate scores, reports and analysis of our users' performances.
What's next for nextstep.ai
Custom diet plans
Basic diet charts with calorie counts for different meals and suggestions based on preferences in the free version
Custom diet plans generated using AI and integration of a chatbot to discuss any queries in the premium version
Community Building:
Building a community of bloggers, fitness enthusiasts and gym freaks to share their performance reports, diet ideas, recipes, articles and images
Rate, like and comment on various fitness center profiles and each workout session so users can look at different categories and decide on what session they'd attend today
Built With
python
tensorflow
Try it out
docs.google.com
drive.google.com |
9,988 | https://devpost.com/software/zoom-education-suite-zest | screenshot of the final working state
screenshot of video feed going to our facial expression analysis system
screenshot of results of facial expression analysis
Closed Captioning Example
Inspiration
A worldwide pandemic has pushed education out of the classroom and into online forums for much of the world. Zoom has emerged as one of the services leading the way in the transition to internet lessons. While certainly a capable platform in its current state, Zoom possesses some inherent limitations as a classroom substitute. We at Zest believe optimizations could be made to improve user accessibility along with providing better engagement metrics to the instructor.
What it does
Zest helps professors better interact with those in their class and track their students' comprehension of the material with numerous ways to collect more data about classroom engagement. (i.e. Total number of hands raised, class attendance, call on students)
Zest creates a more accessible online classroom with its closed captioning service. This allows users with limited hearing to follow along more closely which improves usability.
How we built it
As a result of the fact that Zoom offers no API for a lot of what we wanted to track (see the next section for details), we built a bot using python and selenium to join the call (headless-ly) and collect all the information from the browser client in the background of the host's computer.
The data gathered using our python + selenium component is fed into our python + tkinter interface that is displayed on the host's computer, alongside their Zoom client.
Connected the Twilio API with Google's Speech to Text API to call into and get live audio from the Zoom call and provide live closed captioning during calls
Used Google's Computer Vision for Image Recognition: Using Google Video Intelligence to locate frames that included a person, this would then be passed onto Vision API which would assess the emotions exhibited by those in the frame.
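One of the background tasks mentioned above, attendance tracking, reduces to diffing successive snapshots of the participant list. The sketch below is a hypothetical stand-in (the selenium scraping that would produce the name lists is omitted, and the names are illustrative):

```python
def attendance_diff(previous, current):
    """Compare two scraped participant-name snapshots; return (joined, left)."""
    prev_set, curr_set = set(previous), set(current)
    joined = sorted(curr_set - prev_set)   # present now, absent before
    left = sorted(prev_set - curr_set)     # present before, absent now
    return joined, left

# Two polls of the (hypothetical) participant panel, a minute apart:
snapshot_1 = ["Prof. Smith", "Alice", "Bob"]
snapshot_2 = ["Prof. Smith", "Alice", "Carol"]
joined, left = attendance_diff(snapshot_1, snapshot_2)
print(joined, left)  # ['Carol'] ['Bob']
```

Events like these, accumulated over a session, are what the tkinter dashboard would surface to the host as an attendance log.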
Challenges we ran into
Zoom has no API for accessing a lot of the features we wanted to use, like the number of people raising their hands, the ability to send messages, the ability to get current users, etc.
While we had success with recognition of facial expressions and live closed captioning (they do work), we did not have time to integrate their results directly into our local client that contains the rest of the features.
Unfamiliar with Google's cloud computing platform, we had some struggles integrating their Vision API for emotion detection. One of the bigger issues here was in extracting single frames from Zoom to run analysis.
Accomplishments that we're proud of
Built a self-contained, fairly full-featured client to interface with the Zoom client headless-ly
Got live closed-captioning working in Zoom (see streamable link in additional links section for demo video of that, separately)
What we learned
Throughout the hackathon we were in contact with the Google Cloud API through various manners, whether for subtitles or image recognition. We were also able to better grasp how large scale third-party applications can make use of API's such as Zoom's to offer a different experience and/or a better one.
What's next for Zoom Education Suite (ZEST)
Full integration of the results of all of our features (closed-captioning and facial expression analysis) into the main python + tkinter client that already serves everything from our python + selenium interface
Bot-prompted activities, games, live-video labeling, etc.
Built With
gcp
google-cloud
python
selenium
tkinter
Try it out
zest.pw
github.com
streamable.com |
9,988 | https://devpost.com/software/cordet-an-online-tool-for-detection-of-covid-19-from-cxrs | Inspiration
Medical Imaging using computational methods is a very promising area of work right now. The current computer vision models have the potential to drastically transform the landscape right now. This will not only help in providing quality health care to more people but also will reduce the burnout of people in the medical industry.
What it does
CorDet is an online tool for detection of COVID-19 from chest xray radiographs
How I built it
CorDet's main classifier follows the MobileNet V1 CNN architecture. The model is trained by transfer learning on a dataset of 250+ chest x-rays, with the positive and negative classes equally distributed. The dataset was subjected to image augmentation to prevent overfitting.
Challenges I ran into
Managing the strict memory limits on heroku dynos was not trivial. Had to change the network architecture many times to come up with a memory efficient CNN.
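A back-of-the-envelope calculation shows why a MobileNet-style network helps under tight memory limits: its depthwise separable convolutions replace standard convolutions and cut the parameter count by roughly 1/C_out + 1/k². The layer sizes below are illustrative, not CorDet's actual dimensions:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters in a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Parameters in a MobileNet-style depthwise separable convolution."""
    depthwise = k * k * c_in          # one k x k filter per input channel
    pointwise = c_in * c_out          # 1x1 convolution mixes the channels
    return depthwise + pointwise

k, c_in, c_out = 3, 128, 256          # an illustrative mid-network layer
std = standard_conv_params(k, c_in, c_out)
sep = depthwise_separable_params(k, c_in, c_out)
print(std, sep, round(sep / std, 3))  # 294912 33920 0.115 -- about 8.7x fewer
```

For a 3x3 kernel the saving is dominated by the 1/k² = 1/9 term, which is why swapping the backbone rather than just shrinking layer widths was an effective way to fit the dyno.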
The Dataset
The dataset used here consisted of around 125 COVID-19 positive and around the same number of COVID-19 negative CXRs. The COVID-19 positive images came from a dataset published on a GitHub repo maintained by the University of Montreal; the COVID-19 negative CXRs came from a Kaggle challenge.
There is clearly less data than one would expect, but with time better-quality datasets will be available!
What I learned
Taking a trained CNN and integrating it with web applications is not as easy as it sounds. It involves plenty of work and sweat!
What's next for CorDet : An Online tool for detection of COVID-19 from CXRs
Improving the accuracy of the classifiers and adding support for CT scans. There is also the possibility of a CNN able to diagnose people on the basis of cough sounds; once enough data for such a classifier is publicly available, this feature could be integrated.
Built With
flask
heroku
keras
python
tensorflow
Try it out
corvizapis.herokuapp.com
github.com |
9,988 | https://devpost.com/software/iddonate-i-d-donate | Domain.com submission: iddonate.space, registered from domain.com's MLH code (I'd Donate, covId)
Inspiration
Amidst an unexpected and untimely pandemic, we're all aware that these are times of need, and that there is always someone out there that we can help. This is why we created IdDonate (covId), or I'd Donate (I would donate) - a platform where users can search for, as well as donate and showcase, items for people in need during this time. We know it can be hard for some to access certain necessities, and this hardship inspired our project to alleviate any difficulties.
What it does
Our platform primarily targets users who are in need. Donors and users in need can both register on our website. In particular, donors can “add an item,” that they wish to donate, enclosing a title, description (including their contact information), and an image. This item is automatically added to the central dashboard, where users in need can view this dashboard and note the contact information to contact the donors offline.
How we built it
We used a Python-Flask starter template with MongoDB, implemented a simple registration/login system (deployed on Heroku), and built a front-end employing Bootstrap's layouts for the navigation sidebar and product item cards. We then registered a domain from domain.com for the final deployment.
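The password-handling core of a roll-your-own login system like this can be sketched with the standard library alone. This is an illustrative sketch under stated assumptions, not IdDonate's actual code: the MongoDB insert is represented by a plain dict, and only a salted hash is stored, never the plaintext:

```python
import hashlib
import os

def make_user(email, password):
    """Build a user record with a salted PBKDF2 password hash (hypothetical schema)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"email": email, "salt": salt.hex(), "hash": digest.hex()}

def check_password(user, password):
    """Re-derive the hash with the stored salt and compare."""
    salt = bytes.fromhex(user["salt"])
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return digest.hex() == user["hash"]

user = make_user("donor@example.com", "s3cret")
print(check_password(user, "s3cret"), check_password(user, "wrong"))  # True False
```

In the real app the dict returned by `make_user` would be inserted into a MongoDB collection, and `check_password` would run against the document fetched at login.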
Challenges we ran into
Google's OAuth was very challenging for us to use, so we implemented our own login system with MongoDB. This was also the first time we used Flask and MongoDB, so we ran into many challenges on that side. Finally, the time crunch called for a lot of sleeplessness and adaptability.
Accomplishments that we're proud of
We are proud that we managed to come up with a viable idea that could be used in the world today, and that we got a final product that encompasses our original vision.
What we learned
Using Flask to build a large-scale web app, MongoDB for login authentication, Heroku for hosting, and how to put everything together.
What's next for IdDonate (I'd Donate)
Expanding this to more platforms, such as Android and iOS, and launching this product to users across the world, so that we can all come together in this fight against COVID-19.
Built With
bootstrap
css
flask
heroku
html
javascript
mongodb
python
Try it out
github.com
iddonate.space
openhackscoviddonation.herokuapp.com |
9,988 | https://devpost.com/software/mood-swings | Inspiration
We were inspired by wanting to break out of constantly being worried, anxious, and stuck in a cycle. Finding new music and media is helpful in breaking the monotony of quarantine, and with specifically upbeat recommendations, new music can help lift the mood and make someone's day a little better.
What it does
It uses the Spotify Web API to give personalized, mood specific music recommendations.
How we built it
Using GitHub, the Spotify Web API and JavaScript.
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Mood Swings
Built With
github
javascript
spotify
Try it out
github.com
jcinqueg.github.io |
9,988 | https://devpost.com/software/seropass-37b8sl | Strategy
Overview
Services and Revenue Model
SeroPass Prototype
Intro
Please see images attached.
Built With
adobe-xd
flutter
Try it out
xd.adobe.com
github.com |
9,988 | https://devpost.com/software/scavenge | Inspiration
With interpersonal contact at a minimum due to the ongoing pandemic, I wanted to come up with a way to get outside and engage with your local environment, without needing to break social distancing guidelines.
What it does
Using your zip code, it checks the National Wildlife Federation's website to find plants local to your area, then builds a scavenger hunt from that list (includes pictures).
How I built it
I parsed responses from the National Wildlife Federation's website, after sending them requests with custom cookies, to avoid having to enter the zip code manually. Then, I built a scavenger hunt object with the parsed information for easier usage, and used that as the backend for a (fairly basic) app that allows the user to actually perform the scavenger hunt.
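The custom-cookie trick can be sketched as follows. The URL and cookie name here are illustrative guesses, not the National Wildlife Federation's actual interface, and the request is only prepared, never sent over the network:

```python
import requests

def build_plant_request(zip_code):
    """Prepare a GET request that carries the zip code as a cookie (hypothetical names)."""
    req = requests.Request(
        "GET",
        "https://www.nwf.org/NativePlantFinder/Plants",  # illustrative URL
        cookies={"zip": zip_code},                       # illustrative cookie name
    )
    # .prepare() folds the cookies into a Cookie header without sending anything;
    # a requests.Session().send(prepared) would perform the actual network call.
    return req.prepare()

prepared = build_plant_request("03824")
print(prepared.headers["Cookie"])  # zip=03824
```

Preparing the request up front like this also makes the scraping logic easy to unit-test without touching the live site.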
Challenges I ran into
I've never built a mobile application before (and only one GUI altogether), so that was really tough to figure out. I spent the vast majority of my time trying to make the GUI and backend play nicely together.
Accomplishments that I'm proud of
The app is fairly complete, and I intend to list in on mobile appstores, so my friends (and maybe others) can play around with it.
What I learned
I learned how to build a complete, cross-platform mobile application. I've also confirmed my preference for backend coding over frontend stuff.
What's next for Scavenge!
As I said earlier, I do plan to list it on the Google Play Store and iOS App Store.
Built With
kivy
python
requests
Try it out
github.com
scavenge.space |
9,988 | https://devpost.com/software/covid-19-symptom-tracker-ql781u | Research
Research
Symptom Research
Homepage Design
Symptom Timeline Design
Symptom Tracker
Inspiration
There are a lot of COVID-19 resources available, so we wanted to create one place where they can all be accessed. We tailored it towards college students in hopes of helping our community feel safer when returning to campus in the fall.
What it does
Our website provides general information and resources students can use if they feel they or someone they know has COVID-19. There is a link to Health and Wellness at the University of New Hampshire which makes it easy to schedule appointments if necessary. The symptom quiz allows students to enter the symptoms they are experiencing to help determine the likelihood of them having COVID-19. How students answer each question determines what the next one will be. If a student is logged into their account, they can track their symptoms daily in our symptom tracker. All information will be saved and they can view how their symptoms have changed over time. In the unfortunate case that a student is diagnosed with COVID-19, they can visit our symptom timeline section. This is where a comprehensive guideline to the typical trajectory of the virus can be found. Our website allows students to take control of their health as well as make them feel more comfortable with returning to campus in the fall.
How I built it
We built this website using HTML, CSS, and Angular CLI. We each worked on one HTML and CSS section of the website and then one person put them all together in an Angular application.
Challenges I ran into
A challenge we faced was our level of coding experience. We are college students, each with 0-2 years of coding experience, so we had to teach ourselves and each other as we went.
Accomplishments that I'm proud of
I am proud of my team for working together virtually and being able to reach our goal for this project. None of us have worked together before and this was most of our first time participating in a Hackathon.
What I learned
During this process I learned what goes in to creating a website. I also taught myself basic HTML and CSS, as I had no experience with either.
What's next for COVID-19 Symptom Tracker
First, we would finish implementing certain features and cleaning up the code for the existing features. We would also keep the information we have on COVID-19 as up to date as possible. Our goal is to have colleges and universities use our website as a resource for their students. When we were creating the website, we envisioned the University of New Hampshire adding a link to it in the UNH Mobile app. This would help spread the word to students about the website as well as provide an easy way for them to access it. In the future, this website will be more generalized for all colleges and universities to use.
Built With
angular.js
css
html
typescript
Try it out
github.com |
9,988 | https://devpost.com/software/mass_transit_covid | mass_transit_COVID
Mass Transit algorithm that optimized distance between passengers using seat assignment
Problem
Carriers are burning through between $10 and $12 billion a month due to travel halt.
National average of roughly one passenger for every 20 seats on Airlines.
There is a lack of legislation regarding a mass transit passenger placement safety protocol. Currently, passenger safety is left to the discretion of the flight attendant and digital seat selection when purchasing a ticket. The CDC, WHO, TSA, and Airlines for America could fill this gap by using an algorithm such as this one.
Solution
Using input of rows, columns, and passengers, the algorithm can determine the optimal passenger placement.
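One simple way to realize this idea is greedy farthest-point placement: seat each passenger in the empty seat that maximizes the minimum distance to everyone already seated. The sketch below is an illustrative stand-in for the submission's algorithm, not its exact code, and assumes at least one passenger:

```python
import itertools
import math

def assign_seats(rows, cols, passengers):
    """Greedy farthest-point seat assignment on a rows x cols grid."""
    seats = list(itertools.product(range(rows), range(cols)))
    assigned = [seats[0]]                    # seed the first passenger in a corner
    while len(assigned) < min(passengers, len(seats)):
        empty = [s for s in seats if s not in assigned]
        # pick the empty seat whose nearest occupied seat is farthest away
        best = max(empty, key=lambda s: min(math.dist(s, a) for a in assigned))
        assigned.append(best)
    return assigned

placement = assign_seats(rows=3, cols=6, passengers=3)
print(placement)   # passengers spread toward opposite corners of the cabin
```

Greedy placement is not always globally optimal, which is one motivation for the graph-theoretic upgrades discussed below.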
Troubles / Bugs
I had trouble dealing with handling and merging 2D arrays. Furthermore, in its current state, the algorithm has flaws with dynamically adjusting to extra rows.
Future Improvements
I look to update the algorithm using shortest-path/graph-theory algorithms such as the Floyd-Warshall algorithm or Dijkstra's algorithm. When I began the project, I knew graph theory and dynamic programming would be a better route to a solution, and I am excited to work towards their implementation after the HealthHack.
Built With
python
Try it out
github.com |
9,988 | https://devpost.com/software/trackerxr | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for TrackerXR
TrackerXR is a COVID-19 tracker enhanced by an XR interface.
Try it out
bitbucket.org |
9,988 | https://devpost.com/software/covidoom | Title Screen
Freedoom Phase 1
Doom 1
Doom 1, Spider Mastermind Fight
Inspiration
I have always wanted to mod the classic FPS Doom, but never had the drive until now.
What it does
This WAD changes how some weapons work, as well as the look of weapons/projectiles/enemies.
How I built it
I used Slade for the WAD building and modifying the weapons. Everything was drawn with GIMP.
Challenges I ran into
I have never used Slade before, nor have I programmed for Doom. Learning how objects in Doom work was hard at first, but the basics are not as bad as they seem, and the ZDoom website provided a ton of helpful documentation.
Accomplishments that I'm proud of
At first this was just going to be a sprite redraw, but once I learned what else Slade could do I decided to put more time and effort into this project.
What I learned
I learned how Slade and Doom works, as well as some neat tricks from GIMP.
What's next for COVIDoom
COVIDoom will most likely end here, as a fun first mod of Doom. This said, bigger Doom project ideas are coming to me, and hopefully will come out during my downtime (which I have a lot more of.)
Ultimately, I hope COVIDoom lets people take out their frustration with the stressful situation everyone finds themselves in now, and have some fun during their time spent at home.
Built With
gimp
slade
Try it out
github.com |
9,988 | https://devpost.com/software/divoc-5p2g0o | Teacher Dashboard
Utilities
Students Joined
Flowchart
Inspiration
There is an old saying, "The Show Must Go On," which kept me thinking and searching for a way to connect teachers and students virtually, allow teachers to take lectures from home, and develop a completely open source and free platform, different from the other major paid platforms.
What it does
This website is a completely open source and free tool.
The website, whose link is provided below, allows a teacher to share his or her live screen and audio with all the students connected to the meeting via the Meeting ID and Password shared by the teacher.
Also this website has a feature of Canvas, which can be used as a blackboard by the teachers.
In addition, the website contains a doubt box where students can type in their doubts or answer the teacher's questions while the lecture is going on.
The website also has a tab-counting feature, in which each student's tab change count is shown to the teacher. This helps ensure that every student is paying attention to the lecture.
Also, the teacher can ask questions during the lecture, similar to how a teacher asks questions in a classroom.
How I built it
1) The main component in building this is the open source technology WebRTC (Web Real-Time Communication), which allows screen, webcam, and audio sharing between browsers.
2) Secondly Vuetify a very new and modern framework was used for the front end design, routes and lucid front end animations.
3) Last but not least, NodeJS was used on the backend to write the APIs which connect to and interact with the MongoDB database.
Challenges I ran into
The hardest part of building this website was finding an open source tool for screen and audio sharing. This matters because the COVID crisis has hurt most countries' economies through lockdown, so it is of utmost importance that schools and colleges do not need to pay for conducting lectures.
Accomplishments that I'm proud of
I am proud of developing the complete project from scratch, and of the fact that anyone who wants to connect with students and teach them can use it freely.
Also, in other applications there is no way to know if the student is actually concentrating at the other end. Here, features like 'Tab Change Count' and 'Ask a Question' make it possible.
What I learned
I learned a new technology called WebRTC, which I believe will help me more than I expect in the future.
What's next for Divoc
Integrating an exam module and allowing teachers to take exams from home.
Built With
mongodb
node.js
vue
webrtc
Try it out
divoc-app.herokuapp.com
github.com |
9,988 | https://devpost.com/software/social-distancing-supervision-system-sdss | Social Distacing Supervision System
The coronavirus's rapid spread has forced nations to use every trick in the book to contain it. A wide range of technologies has been used in the fight against the global pandemic, from applications that collect data to track the spread of the virus to 3D printed ventilators for hospitals.
As Ray Kurzweil said, technology is the only thing that has helped us overcome our problems; we use our tools to extend our range of possibilities, our minds, and our "mindwares." The philosophers Andy Clark and David Chalmers likewise talk about technology as a scaffolding that extends our thoughts and our reach.
Driven by our passion for technology, and with the increasing number of coronavirus infections and limited testing resources worldwide, we recognized that flattening the curve is only achievable through collaborative effort, and social distancing became a necessity for survival.
Social distancing implies changing our day-to-day routines to reduce close contact with others. While general social distancing rules are easy to maintain on sidewalks, they can be more stressful for business owners on job sites and in grocery stores. Moreover, they complicate the government's responsibility to monitor crowded institutions and public places.
On this matter, we address the problem of unmonitored crowds by developing the SDSS, short for Social Distancing Supervision System, during the Hackathon. It is real-time software based on image processing that spots crowd density. The system takes real-life videos from crowded places as input to analyze, identify, and report the measured distance between the individuals present in a given frame. As a result, an alert is sent to the supervisor in case of a distance violation, thus preserving the social distancing rules promoted by the World Health Organization.
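The distance check at the heart of a system like this can be sketched in a few lines. This is a minimal illustration assuming person detections are already available as pixel bounding boxes; the function names and the pixel threshold are ours for illustration, not the project's actual code:

```python
from itertools import combinations
import math

def centroid(box):
    # box = (x, y, w, h) in pixels, top-left corner plus size
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def find_violations(boxes, min_dist_px):
    """Return index pairs of detections whose centroids are closer
    than min_dist_px, i.e. candidate social-distancing violations."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (ax, ay), (bx, by) = centroid(a), centroid(b)
        if math.hypot(ax - bx, ay - by) < min_dist_px:
            pairs.append((i, j))
    return pairs
```

In a real deployment the pixel threshold would need calibration against the camera's perspective, since pixel distance only approximates real-world distance.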
Teamwork makes the dream work, but sometimes it can be a challenge, especially since successful teamwork is all about connecting with teammates around a common goal or purpose. While remote work is physically possible, it is not the optimal way for our team to engage.
As our daily lives were disrupted, we learned to build our own routine, create a remote-work ambiance, and find time to laugh with each other. Communicating through a digital medium without shared physical context made developing the project more challenging, especially with connectivity breaking up every once in a while. But we managed by re-reading what we wrote and explained, always ensuring that all team members were on the same page and that no assumptions slipped through unnoticed, and those little tricks make a world of difference.
What we must ultimately learn and keep in mind is that we are at a pivotal point in history: we have decommissioned natural selection, we are now the chief agents of evolution, and we have to maintain this position with all our collective efforts.
Our strong and endless will lies within the next steps with what we are planning for SDSS. That includes improving its ability for crowd detection and faster alerts. We also would like to build a Web and Mobile application to extend our use cases to make it useful by not only governments and business owners but also by simple individuals.
Built With
angular.js
opencv
python
Try it out
github.com |
9,988 | https://devpost.com/software/hospo-ai-powered-by-ai-blockchain | Inspiration
We combine telemedicine with blockchain technology for storing EHR data, and use computer vision and natural language processing models to save time.
What it does
We focus on combining AI, Blockchain, and Telemedicine.
Doctors can access many deep learning models from our repo
With blockchain, unique IDs and smart contracts can be created for easy data sharing
Heart rate, fitness, and other stats can be retrieved from smart devices
Computer Vision Models:
dermatology, radiology, ophthalmology, pathology, & many more
NLP Models
QA over bioRxiv papers, automatic speech recognition to accurately transcribe patient visits, and QA over EHRs
Data Sources
X-rays, Mammograms, MRI, EHR, Journals, Speech
Healthchain
Tests, consultations, and data from smart devices can be added to the healthchain
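The append-only "healthchain" idea can be illustrated with a minimal hash chain. This is a simplified sketch of the append-and-verify principle only, not the project's actual blockchain or smart contract code; all names are illustrative:

```python
import hashlib
import json
import time

def make_block(record, prev_hash):
    """Append one 'healthchain' entry. The block's hash covers the
    record and the previous block's hash, so every block commits to
    the whole history before it."""
    block = {
        "timestamp": time.time(),
        "record": record,        # e.g. a lab test or device reading
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify_chain(chain):
    """Check that every block still points at its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == chain[i - 1]["hash"]
        for i in range(1, len(chain))
    )
```

Because each block's hash covers the previous one, tampering with a stored record breaks the linkage for every later block, which is the property that makes shared EHR data auditable.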
What's next for hospo.ai - Powered by AI & Blockchain
Web and mobile apps
Built With
ai
blockchain
computervision
natural-language-processing
Try it out
github.com |
9,988 | https://devpost.com/software/providing-vulnerable-workers-with-legitimate-job-postings-5w9bdj | Inspiration
The COVID-19 pandemic is affecting economies on every continent. Unemployment rates are spiking every single day, with the United States reporting around 26 million people applying for unemployment benefits, the highest number recorded in its long history; millions have been furloughed in the United Kingdom, and thousands have been laid off around the world.
These desperate times provide a perfect opportunity for online scammers to take advantage of the desperation and vulnerability of the millions of people looking for jobs. We have seen a steep rise in fake job postings during COVID-19.
In the grand scheme of things, what may start off as a harmless fake job advert has the potential to end in human trafficking. We are trying to tackle this issue at the grassroots level.
What it does
We have designed a machine learning model that helps distinguish fake job adverts from genuine ones. We trained six models and compared their performance.
To show how our ML model can be integrated into any job portal, we designed a mobile application that demonstrates the integration from a job seeker's point of view.
Our mobile application has four features in particular:
1) Portfolio page: This page is the first page of the app post-login, which allows a job seeker to enter their employment history, much like any other job portal/app.
2) Forum: A discussion forum allowing job seekers from all around the world to share and gain advice
3) Job Finding: The main page of the app which allows job seekers to view postings that have been run through our Machine learning algorithm and have been marked as real adverts.
4) Chat feature: This feature allows job seekers to communicate with employers directly and discuss job postings and applications.
How we built it
We explored the data and provided insights into which industries are more affected and which critical red flags can give away these fake postings. Then we applied machine learning models to detect these counterfeit postings.
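As a toy illustration of the "red flag" idea, a posting can be scored against known scam phrases. The phrases and threshold below are made-up examples; the project itself relies on trained models rather than a hand-written heuristic like this:

```python
import re

# Hypothetical scam phrases, for illustration only
RED_FLAGS = [
    r"no experience (needed|required)",
    r"wire transfer",
    r"pay.{0,20}upfront",
    r"earn \$\d+ (per|a) (day|week)",
]

def red_flag_score(posting: str) -> int:
    """Count how many known scam phrases appear in a job posting."""
    text = posting.lower()
    return sum(bool(re.search(p, text)) for p in RED_FLAGS)

def classify(posting: str, threshold: int = 2) -> str:
    """Flag a posting as suspicious when it trips enough red flags."""
    return "suspicious" if red_flag_score(posting) >= threshold else "likely real"
```

A trained model effectively learns weights for thousands of such signals from the text instead of a fixed list.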
In further detail:
Data collection: We used an open source dataset containing 17,880 job post details, 900 of which were fraudulent.
Data visualisation: We visualised the data to see whether there were key differences between real and fake job postings, for example whether fraudulent postings used fewer words than real ones.
Data split: We then split the data into training and test sets.
Model Training: We trained various models such as Logistic Regression, KNN, Random Forest, etc. to see which worked best for our data.
Model Evaluation: Using various classification metrics, we evaluated how well our models performed. For example, our Random Forest model had a roc_auc score of 0.76. We also compared how each model did against the others.
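The roc_auc score reported above has a direct probabilistic reading: it is the chance that a randomly chosen fraudulent posting is ranked above a randomly chosen real one. A minimal stdlib sketch of that computation (the team presumably used a library implementation such as scikit-learn's `roc_auc_score`):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a positive (fraudulent, label 1)
    outranks a negative (real, label 0); ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.1])` gives 0.875; a score of 0.5 means the model ranks no better than chance, so 0.76 indicates a genuinely useful signal.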
Immediate Impact
Especially during but also after COVID-19, our application would aim to relieve vulnerable job seekers from the fear of fake job adverts. By doing so, we would be re-focusing the time spent by job seekers onto job postings that are real, and hence, increase their chances of getting a job. An immediate consequence of this would be decreasing traffic onto fake job adverts which would hopefully, discourage scammers from posting fake job adverts too.
Police departments don’t have the resources to investigate these incidents, and it has to be a multi-million-dollar swindle before federal authorities get involved, so the scammers just keep getting away with it. Hence our solution saves millions of dollars and hours of investigation, whilst protecting workers from being scammed into fake jobs and having their information misused.
Revenue generated
Our Revenue model is based on:
1) Premium subscription availability to job seekers to apply for jobs
2) Revenue from the advertisements
3) Commission from the employers to post the jobs
Funding Split
1) Testing and Development: $ 10,000
2) Team Hire Costs: $ 2000
3) Patent Application Costs: $ 125
4) Further Licensing conversations: $ 225
TOTAL: $ 12,350
Future Goals
We would hope to partner up with LinkedIn or other job portals in a license agreement, to be able to integrate our machine learning model as a feature on their portal.
Built With
adobe
machine-learning
python
Try it out
github.com
xd.adobe.com |
9,988 | https://devpost.com/software/covid-hero | chifa - the covid hero chatbot
web page
chatbot chat session
Inspiration
Since the outbreak of the fast-growing COVID-19 pandemic, it has become important for each of us to keep a minimum distance from others, which we call social distancing. It has also become important to know the precautions, and people face many doubts and queries related to COVID. To keep us aware and to answer COVID-related queries, we have developed a chatbot called COVID-hero that will guide you through this global disaster.
How we built it
This app lets you know the number of coronavirus cases in your country. It offers more than statistics: it also helps the user learn the Do's and Don'ts of this pandemic and recognize the symptoms of COVID-19.
It lets visually impaired or illiterate users scan a QR code and have the notice read aloud to them.
It reminds people who may forget, such as those with Alzheimer's, when to take their medication.
There is also a self-screening option you can use to check whether your symptoms match those of coronavirus.
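Turning a raw country-stats API response into a one-line chat reply is the kind of glue code the statistics feature needs. A small sketch, with a made-up JSON shape since the actual API schema isn't shown here:

```python
import json

def country_summary(payload: str, country: str) -> str:
    """Parse a raw JSON stats payload (illustrative schema) into the
    one-line reply the bot would send back to the user."""
    data = json.loads(payload)
    row = next(c for c in data["countries"] if c["name"] == country)
    return (f"{row['name']}: {row['confirmed']} confirmed, "
            f"{row['recovered']} recovered, {row['deaths']} deaths")
```

In Dialogflow, a function like this would sit in the fulfillment webhook, mapping the detected country entity to the API data.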
Challenges we ran into
This project deals with APIs and Dialogflow. The API provides exact data for each country. Most members were new to working with APIs; they knew what an API was but had never used one in a practical project, which cost us a lot of time. Most importantly, we were in different time zones, so coordinating with each other was difficult. This project needed a lot of attention.
Accomplishments that we're proud of
In the end, we planned out our whole work and which team member had to do what. Because of the time zones, we established a common time when we all met to discuss our work. We successfully created the chatbot with teamwork and consistency, even though most of us had never experienced a hackathon before. Everyone showed teamwork, and we built the best project we could.
What we learned
Through teamwork we accomplished our project. We learned team collaboration and remote working. We all learned a trending new technology: chatbots. We learned HTML/CSS and JavaScript, dealt with APIs and Dialogflow, and learned how the frameworks work. Most importantly, we learned consistency and teamwork, which made this project successful. Our network also grew through this hackathon.
What's next for COVID-hero
We are planning to launch this website on a large scale. We will add more features such as doctor consultancy, data shown through graphs, and SMS notification of the nearest hospital. We will improve the website by adding a sign-in feature, and we will also create a chatbot for hospitals. With the data collected, we will make the chatbot more useful and intelligent so it can anticipate users' illnesses.
Built With
css
dialogflow
html
javascript
node.js
Try it out
covid-hero.herokuapp.com
github.com |