We Generated P5-Code Based on Text Inputs with OpenAI’s GPT-3 and This Could Show the Future of Designer-Machine Co-Creation

Dr. Sebastian Loewe
Mar 7, 2021 · 6 min read


We had the opportunity to test-run OpenAI’s massive transformer network GPT-3, which has already made huge waves in the human-computer interaction and interaction design communities. The Munich NLP Group hosted a digital hackathon at the beginning of March 2021 and offered access to the model to people interested in experimenting with it. We started the hackathon by building on an idea from Sharif Shameem, who used GPT-3 to produce JSX and HTML code.

Here is a little technical background to understand the basics: GPT-3 is a very large transformer network with 175 billion parameters that follows a few-shot learning paradigm. This means that users provide a couple of examples (also known as prompts) to steer the model in the direction they want it to go. Because the model is so large and was trained on such a broad corpus, it is able to generalize across very different domains. In our case, it can map a text input to the corresponding HTML code. To do so, we used the Text-to-command setting within the web interface; OpenAI offers different presets for GPT-3 that expose different capabilities of the model. Text-to-command is a question-and-answer mode in which you show the model some gold-standard questions and answers. For the HTML task, we did this:

The initial idea of letting GPT-3 create HTML code.
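To make the Text-to-command setup concrete, here is a hypothetical reconstruction of how such a few-shot prompt is assembled: a few gold-standard Q/A pairs followed by the new query, with the trailing "A:" left open for the model to complete. The helper name and the example wording are illustrative, not the exact prompt we used.

```javascript
// Assemble a few-shot prompt of the kind the Text-to-command preset expects.
// The Q/A pairs and the function name are hypothetical examples.
function buildPrompt(examples, query) {
  const shots = examples
    .map(({ q, a }) => `Q: ${q}\nA: ${a}`)
    .join("\n\n");
  // The final "A:" is left open for the model to complete.
  return `${shots}\n\nQ: ${query}\nA:`;
}

const prompt = buildPrompt(
  [{ q: "a heading that says Hello", a: "<h1>Hello</h1>" }],
  "a button that says Submit"
);
console.log(prompt);
```

The model then continues the text after the final "A:", which is where the generated code appears.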

Through testing and experimenting, we stumbled across the fact that GPT-3 also ‘knows’ P5.js/Processing code, which is not yet widely known. As a visual and interaction designer and educator working with P5.js, I was surprised and wanted to know whether it actually produces valid code. I used these two prompts to prime the model:

Q: drawing 10 pixel thick black lines on a white background every 15 pixels
A: var y = 0; function setup() { createCanvas(400, 400); background(255);} function draw() {strokeWeight(10); line(0, y, width, y); y += 15;}

Q: drawing a circle with 100 pixel diameter where the mouse is and fill it green
A: function setup() { createCanvas(500, 500); background(150);} function draw() { noCursor(); noStroke(); fill(100, 200, 100); circle(mouseX, mouseY, 100);}

The first amazing thing was that the model generated code that really worked (we pasted it into the browser-based P5 editor to check that it was correct). The second surprise was that it correctly generated functions that weren’t explicitly covered by the given prompts (like “line” in the example below). Here are our first steps: asking the model to return code for the query “drawing a line across the canvas”.

This was the first test run that actually proved that GPT-3 can generate valid code.
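Pasting every snippet into the P5 editor by hand was our actual workflow; as a lightweight alternative (a sketch, not something we used in the hackathon), a generated snippet can be handed to the `Function` constructor to catch outright syntax errors before pasting. This only checks JavaScript syntax, not whether p5.js calls like `createCanvas()` are used sensibly.

```javascript
// Minimal syntax check for a generated JavaScript snippet.
// Only JavaScript grammar is validated; p5.js semantics are not.
function isParsableJs(code) {
  try {
    new Function(code); // parses the code without executing it
    return true;
  } catch (err) {
    return false; // SyntaxError: the snippet is not valid JavaScript
  }
}

console.log(isParsableJs("function setup() { createCanvas(400, 400); }")); // true
console.log(isParsableJs("function setup() { createCanvas(400, 400 }"));   // false
```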

We were really happy with the outcome. We also ran some open-ended queries like “drawing a random shape with random colors”, and the model returned interesting code snippets from just these two prompt examples. Take a look:

This is a test with vaguer queries for GPT-3.

Of course, we wanted to test the power and limits of the model, so we came up with more prompt examples:

Q: Draw a green circle on a light grey background where the mouse is
A: function setup() { createCanvas(500, 500); background(150); } function draw() { fill(50, 250, 0); circle(mouseX, mouseY, 100); }

Q: Draw a medium blue circle and small yellow circle with no outline on a light cyan background where the mouse is
A: function setup() { createCanvas(500, 500); background('lightcyan'); } function draw() { noStroke(); fill('blue'); circle(mouseX, mouseY, 50); fill('yellow'); circle(mouseX, mouseY, 20); }

Q: Draw a line of small white circles with a thick outline horizontally on a light grey background
A: var x = 0; function setup() { createCanvas(400, 400); background(220); } function draw() { strokeWeight(10); for (var x = 0; x <= width; x += 40) { circle(x, 200, 25); }}

Q: Draw a blue medium sized square if the mouse is on the left side of the canvas and a pink circle if the mouse is on the right side of the canvas
A: function setup() { createCanvas(400, 400); } function draw() { background(120); if(mouseX >= 200 & mouseY >= 200) { fill(255, 0, 200); ellipse(300, 200, 100, 100); } else { fill(0, 110, 200); rect(100, 100, 100, 100); }}

Q: Draw ten circles on a random position around the current mouse position with random colors and hide the mouse cursor and draw no outline
A: function setup() { createCanvas(800, 600); background(190);} function draw() { noCursor(); for (var i = 0; i < 10; i++) { var x = random(mouseX-30,mouseX+20); var y = random(mouseY-20,mouseY+30); var r = random(50, 60); fill(random(250), random(250), random(250), 100); noStroke(); circle(x, y, r); }}

Q: Draw thin black line on light grey background from previous mouse position to current mouse position
A: function setup() { createCanvas(400, 400); background(220);} function draw() { line(pmouseX, pmouseY, mouseX, mouseY);}

Q: Continuously draw horizontal black lines across the canvas
A: var y = 0; function setup() { createCanvas(400, 400); background(255); } function draw() { strokeWeight(10); line(0, y, width, y); y += 15;}

Q: Continuously draw diagonal colorful lines across the canvas and blend them together
A: var r = 0; var g = 0; var b = 0; var y = 0; var z = 0; var s = 0; function setup() { createCanvas(800, 800); background(255); colorMode(RGB, 300); blendMode(DIFFERENCE); } function draw() { strokeWeight(s*0.5); stroke(r, r, g); line(0, y+10, width, z+10); stroke(r, r, b); line(0, z+10, width, y+10); stroke(r, b, b); line(0, y, width, z); stroke(r, g, g); line(0, z, width, y); z+=20; y+=10; r+=10; g+=5; b+=10; s++;}

Q: draw a small rectangle that wanders randomly across the canvas changing its color
A: var x = 500; var y = 300; function setup() { createCanvas(800, 600); background(20); rectMode(CENTER); colorMode(RGB, 600);} function draw() { if(x > 0 && x < width && y > 0 && y < height){ x = x + (random(-10,10)); y = y + (random(-10,10)); noStroke(); fill(x, y, 0); rect(x, y, 10, 10); } else { x = random(100, 600); y = random(200, 400); }}

Q: A button that changes the background color randomly
A: var button; function setup() { createCanvas(400, 400); background(0); button = createButton('changes BG color'); button.position(20, 419); button.mousePressed(changeBG);} function changeBG() {var val=random(255); var val2 = random(255); background(val, val2, 200);}

Q: Draw a vector shape with diagonal lines
A: var outline = []; function setup() { createCanvas(800, 500); background(30); outline.push(createVector(100, 100)); outline.push(createVector(150, 350)); outline.push(createVector(600, 300)); outline.push(createVector(650, 150)); beginShape(); for(var i = 0; i < outline.length; i++) { vertex(outline[i].x, outline[i].y); } endShape(); }

Q: Draw a layout grid with three greenish rectangles that adapt to the canvas size
A: function setup() { myCan = createCanvas(600, 800); } function draw() { const margin = height / 20; const imgWidth = width - 2 * margin; const allHeight = height - 4 * margin; const imgHeight = allHeight / 3; background(240); noStroke(); fill(30); fill(75, 185, 165); rect(margin, margin, imgWidth, imgHeight); fill(120, 155, 155); rect(margin, margin + imgHeight + margin, imgWidth, imgHeight); fill(30, 50, 50); rect(margin, margin + 2 * (imgHeight + margin), imgWidth, imgHeight);}
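One curiosity worth flagging in the blue-square/pink-circle answer above: GPT-3 used the bitwise operator `&` where a JavaScript programmer would write the logical `&&`. On booleans, `&` coerces both operands to 0 or 1 and yields a number, which is still falsy/truthy in the expected way, so the generated sketch runs despite the unidiomatic operator:

```javascript
// Why the generated `mouseX >= 200 & mouseY >= 200` condition still works:
// on booleans, bitwise & returns a number (0 or 1), logical && a boolean,
// and both behave identically inside an if-statement.
const mouseX = 300, mouseY = 250;

const bitwise = (mouseX >= 200) & (mouseY >= 200);  // 1 (a number)
const logical = (mouseX >= 200) && (mouseY >= 200); // true (a boolean)

console.log(bitwise, logical, Boolean(bitwise) === logical); // 1 true true
```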

With these prompts in store, we tested a variety of queries and questions. The more concrete and specific the question, the more precisely GPT-3 works: asking it to render two red rectangles often resulted in exactly that. But where’s the fun in using the model to do what you, as a designer, already have in mind? We consider GPT-3 more of an inspirational tool that gives designers surprising and unforeseen options. Hence we also tested whether the model could generate code from vague and metaphoric questions. It turns out GPT-3 isn’t bad at this. See for yourself: we asked it to come up with “something astonishing”.

This is a proof of concept that vague and metaphoric language can actually produce interesting outcomes.

This is pretty cool, right? We don’t think this will render creatives obsolete; rather, it could give rise to a different, experimental design practice. Since some of the code created by GPT-3 does not make sense at first glance, it can be fun and rewarding for designers to figure out what the model “intended”, and to re-edit and remix the initial lines of code, building on the model’s “idea”. Ideally, this would form a new kind of human-and-machine co-creation. The last video demonstrates this vision in a rather preliminary state.
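As a small sketch of this re-editing workflow, one could lift the wander-and-reset logic out of the model’s wandering-rectangle answer into a plain function that can be tweaked and tested outside the canvas. The helper name is hypothetical, and the respawn rule is deliberately changed from the model’s random respawn to a center respawn — exactly the kind of remix described above.

```javascript
// Re-edit of the model's wandering-rectangle logic (hypothetical helper name).
// The in-bounds step mirrors the generated random(-10, 10) movement; the
// out-of-bounds case is our remix: respawn at the canvas center instead of
// at a random position.
function wanderStep(pos, bounds, rng = Math.random) {
  const step = () => rng() * 20 - 10; // equivalent to p5's random(-10, 10)
  const { x, y } = pos;
  if (x > 0 && x < bounds.width && y > 0 && y < bounds.height) {
    return { x: x + step(), y: y + step() };
  }
  return { x: bounds.width / 2, y: bounds.height / 2 };
}

console.log(wanderStep({ x: 400, y: 300 }, { width: 800, height: 600 }));
```

Inside a p5 sketch, `draw()` would call this once per frame and draw a small rectangle at the returned position.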


Thanks to Kun Lu and the Munich NLP Group for the support.



Dr. Sebastian Loewe

Professor for design & management at Mediadesign University of Applied Sciences, Berlin, Germany.