Presence Brush Documentation Video



Akshit Arora, Anna Sophia Wolniewicz, Lillie Bahrami

Source code:
A-Painter: https://github.com/aframevr/a-painter
Sketch RNN: https://magenta.tensorflow.org/sketch-rnn-demo
Sketch RNN as a service: https://github.com/Improv-tilt-brush/smartgeometry
Our GitHub: https://github.com/Improv-tilt-brush

Script:
The user experience goal of this project was to build a creative, immersive, and engaging application in virtual space. We decided to create a 3D drawing app enabling real-time collaboration between a human artist and artificial intelligence.

To accomplish this, we chose to modify A-Painter, an open-source 3D painting application from Mozilla. A-Painter is built with A-Frame, an accessible web framework for developing VR content. Our first step was to study the large code base and learn how A-Painter's various features were implemented using A-Frame's entity-component-system architecture.
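The entity-component-system pattern at the heart of A-Frame can be sketched in plain JavaScript: entities are simple containers, and all behavior lives in named components attached to them. This is a toy illustration of the pattern only, not A-Painter's actual code (in the real framework, `AFRAME.registerComponent` plays the role of the registry below).

```javascript
// Registry of component definitions (a stand-in for A-Frame's
// AFRAME.registerComponent).
const components = {};

function registerComponent(name, definition) {
  components[name] = definition;
}

// An entity is just an id plus a bag of attached component instances.
function createEntity(id) {
  return { id, components: {} };
}

function attachComponent(entity, name, data) {
  const instance = { data, ...components[name] };
  instance.init(entity); // lifecycle hook, as in A-Frame
  entity.components[name] = instance;
  return instance;
}

// Hypothetical "position" component for illustration.
registerComponent('position', {
  init(entity) {
    entity.position = { x: this.data.x, y: this.data.y, z: this.data.z };
  },
});

const brush = createEntity('brush');
attachComponent(brush, 'position', { x: 0, y: 1.6, z: -1 });
console.log(brush.position); // { x: 0, y: 1.6, z: -1 }
```

In A-Frame itself, components are declared as HTML attributes on `<a-entity>` elements, which is what makes the framework approachable for web developers.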

Our next discovery was Sketch-RNN, a recurrent-neural-network-based sequence-to-sequence model that takes in a two-dimensional stroke and completes a doodle from it. We used a Node.js server to pass information between A-Painter and the neural network.
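Sketch-RNN consumes doodle strokes as sequences of pen offsets rather than absolute coordinates. The following is a minimal sketch, assuming a simple delta ("stroke-3") encoding of `[dx, dy, penLifted]`, of converting a list of absolute 2D points into that form before sending it to the model; the project's actual serialization may differ.

```javascript
// Convert absolute 2D points into delta format: each entry is
// [dx, dy, penLifted], with penLifted = 1 after the stroke's last point.
function toStrokeFormat(points) {
  const strokes = [];
  for (let i = 1; i < points.length; i++) {
    const dx = points[i][0] - points[i - 1][0];
    const dy = points[i][1] - points[i - 1][1];
    const penLifted = i === points.length - 1 ? 1 : 0;
    strokes.push([dx, dy, penLifted]);
  }
  return strokes;
}

const points = [[0, 0], [3, 4], [5, 4]];
console.log(toStrokeFormat(points));
// [[3, 4, 0], [2, 0, 1]]
```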

One major challenge was deciding which information from a stroke created in A-Painter should be passed to Sketch-RNN, which operates at a much lower resolution than A-Painter. For this project, we passed only every tenth point of the stroke to the model, and reinserted the result into the 3D space on return.
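The keep-every-tenth-point step can be sketched as a simple downsampling pass. The function name and the endpoint handling below are our own choices for illustration; the project's actual helper may differ.

```javascript
// Keep every `step`-th point of a stroke, always preserving the final
// point so the stroke's endpoint is not lost.
function downsample(points, step = 10) {
  const kept = points.filter((_, i) => i % step === 0);
  if (points.length > 0 && (points.length - 1) % step !== 0) {
    kept.push(points[points.length - 1]);
  }
  return kept;
}

// A 25-point stroke keeps indices 0, 10, 20, plus the last point.
const stroke = Array.from({ length: 25 }, (_, i) => [i, i * 2]);
console.log(downsample(stroke).length); // 4
```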

Finally, on save, our modification takes a single 3D stroke, converts it to 2D, uses Sketch-RNN to generate the completing strokes, and places them back in the scene. This introduces more fun and spontaneity into the creative process.
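The 3D-to-2D conversion and the reinsertion step can be sketched as a projection onto a plane and a lift back to a fixed depth. This is an illustrative approach under our own assumptions (projecting onto the world X/Y plane); the project's actual projection may differ.

```javascript
// Project 3D points onto a plane spanned by two orthonormal basis
// vectors (defaults: world X and Y axes, i.e. dropping depth).
function projectTo2D(points3D, u = [1, 0, 0], v = [0, 1, 0]) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  return points3D.map((p) => [dot(p, u), dot(p, v)]);
}

// Reinsert generated 2D strokes into 3D space at a fixed depth.
function liftTo3D(points2D, z = -1) {
  return points2D.map(([x, y]) => [x, y, z]);
}

console.log(projectTo2D([[1, 2, 3], [4, 5, 6]])); // [[1, 2], [4, 5]]
console.log(liftTo3D([[1, 2]], -2));              // [[1, 2, -2]]
```

Dropping the depth axis is lossy, which is exactly why a model that works directly on 3D strokes (mentioned below as future work) would be an improvement.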

In the future, we would love to develop a model that makes full use of all the 3D information with no conversion to 2D. We would also like to implement multi-user functionality. These features would take full advantage of the virtual environment and allow collaborative remote drawing.

The three of us have really enjoyed and learned so much from this process. Above all, we have learned how much we have yet to learn.

