AI Generated Programming Language Experiment
While messing around a little bit with prompts in Bing Copilot, I asked it to create a programming language that was easy to code and hard to debug. What did I get? The results were surprisingly unsurprising.
The Prompt
Create the basic syntax for a programming language named fwfvevetgh34gdfvdrrtg. It should be easy to use, but hard to debug.
The Response
Sure, let’s create a basic syntax for a fictional programming language named fwfvevetgh34gdfvdrrtg. We’ll aim for simplicity in writing code but introduce elements that make debugging challenging.
Basic Syntax for fwfvevetgh34gdfvdrrtg
Variables
var x = 10
var y = "Hello, World!"
Functions
func add(a, b) {
    return a + b
}
Conditionals
if (x > 5) {
    print("x is greater than 5")
} else {
    print("x is 5 or less")
}
Loops
for i in 1..10 {
    print(i)
}
Debugging Challenges
- Implicit Type Conversions: Variables can change types without warning.
var z = 10
z = "Now I'm a string!"
- Silent Errors: Errors do not produce messages; they fail silently.
var a = 10 / 0 // No error message, just fails silently
- Ambiguous Syntax: Similar syntax for different operations.
var b = 5
b = b + 1 // Increment
b = b + "1" // Concatenate, but no error or warning
- Hidden State Changes: Functions can modify global state without clear indication.
var globalVar = 0
func modifyGlobal() {
    globalVar = 42
}
modifyGlobal()
print(globalVar) // Prints 42, but no indication that modifyGlobal changed it
This language is designed to be easy to write but introduces subtle complexities that make debugging a challenge. If you have any more ideas or need further details, feel free to ask!
My Conclusion
The result looks like something between JavaScript and Python with maybe a taste of Ruby thrown in.
It seems to me that the model picked up on the commonly cited strengths and criticisms of higher-level languages like JavaScript and Python, and translated my prompt of "easy to code, hard to debug" directly along the lines of those popular views.
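In fact, the "debugging challenges" the model listed map almost one-to-one onto real JavaScript behavior. As a rough sketch (runnable in Node.js), here is how each of the four pitfalls already plays out in plain JavaScript:

```javascript
// Implicit type conversions: a variable can silently change type.
let z = 10;
z = "Now I'm a string!"; // no warning, no error

// Silent errors: dividing by zero doesn't throw, it just yields Infinity.
let a = 10 / 0;
console.log(a); // Infinity, no error message

// Ambiguous syntax: + means addition or concatenation depending on types.
let b = 5;
b = b + 1;   // 6 (numeric increment)
b = b + "1"; // "61" (string concatenation, no error or warning)
console.log(b); // "61"

// Hidden state changes: a function can mutate outer scope freely.
let globalVar = 0;
function modifyGlobal() { globalVar = 42; }
modifyGlobal();
console.log(globalVar); // 42, with no indication at the call site
```

So the model didn't really invent a hard-to-debug language; it mostly described one that already exists.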
My takeaway? It's a little more anecdotal evidence that LLMs can be pretty darn good at summarizing the content of the internet and turning it into new output - even if only semi-useful output.