arxiv:1705.07962
pix2code: Generating Code from a Graphical User Interface Screenshot
Published on May 22, 2017
Abstract
AI-generated summary
Deep learning models can generate code end-to-end from a single GUI screenshot, reaching over 77% accuracy across iOS, Android, and web platforms.
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% of accuracy for three different platforms (i.e. iOS, Android and web-based technologies).