Guest Post from Nick Sovich.
-----------------------------------------------------------------------------
Bill Gasarch recently blogged on RANDOM THOUGHTS ON AI here. He is in the realm of theory. I am in the realm of applications, so I asked if I could do a post on AI from that background. He agreed, and here's the post:
1) Words Matter
Today you tell a foundation model who to be and how to behave by giving it a system prompt and a chat prompt. The system prompt is the AI analog of
“I’m a CS professor, I’m an expert in Ramsey theory, and my goal is to help students learn.”
The chat prompt is the AI analog of
“Hi CS Professor, can you please help me write a blog post to formulate my thoughts on AI?”
System prompts and chat prompts are made of words. Getting the right words is very important, in the same way that writing the right lines of code is important.
So if you have an AI that *can* be any 157-IQ expert in any field, you still have to tell it what kind of expert to be, what its worldview is, and how to communicate. Because it can’t have infinite worldviews and communication styles all at once.
Example System Prompt and Chat Prompt are here.
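To make this concrete, here is a minimal sketch of how a system prompt and a chat prompt are actually passed to a model, using the OpenAI Python client. The model name and the prompt wording are illustrative assumptions, not the exact prompts linked above.

```python
# Minimal sketch: the system prompt sets who the model is; the chat
# prompt is the user's request. Model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat model works here
    messages=[
        # System prompt: who to be and how to behave.
        {"role": "system",
         "content": "You are a CS professor and an expert in Ramsey "
                    "theory. Your goal is to help students learn."},
        # Chat prompt: what the user actually wants.
        {"role": "user",
         "content": "Hi CS Professor, can you please help me write a "
                    "blog post to formulate my thoughts on AI?"},
    ],
)
print(response.choices[0].message.content)
```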
2) If English becomes the #1 programming language, then what’s the difference between a poet and a computer scientist?
There has been talk in the industry of English eventually becoming the number 1 programming language. What does that mean? A poet has mastery over the English language. A computer scientist has mastery over programming languages. A poet can create works of art. A poet can be creative, but so can a computer scientist. A computer scientist can be pragmatic, but so can a poet. Will poets become better programmers than computer scientists? Will computer scientists become better artists than poets? Does it matter?
3) What is code and what is poetry?
Bad code doesn’t compile, or it compiles but doesn’t run, or it runs but has bugs. Bad poetry doesn’t express the author’s intent, or it doesn’t evoke the reader’s emotion, or it doesn’t convey a message to society. Bad code is still code, bad poetry is still poetry. Humans can produce good and bad code and good and bad poetry. AI can produce good and bad code and bad poetry. Can AI produce good poetry?
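To make the taxonomy concrete, here is a tiny hypothetical Python example of the third kind of bad code: it parses and runs without complaint, but a subtle bug makes the answer wrong.

```python
def average(grades):
    """Intended to return the mean of a list of grades."""
    total = 0
    for g in grades[:-1]:  # bug: the slice silently drops the last grade
        total += g
    return total / len(grades)

print(average([90, 80, 100]))  # prints ~56.67, not the intended 90.0
```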
4) AI is a tool, like auto-complete.
This is both good and bad.
Underuse the tool, and you waste a lot of time.
Overuse the tool, and you don't understand your own product.
5) Where should AI start and end in academia?
This is an important question worthy of a separate blog that I might do in the future.
6) AI can either replace people or empower people.
We should look for ways that it empowers people. A nice 38-second clip about that is here.
As an example:
Good: Giving the vet tech tools to record my intake questions through natural language instead of pen and paper.
Bad: Automating away all vet tech jobs and replacing them with humanoid robots that would scare my dog anyway. It’s traumatic already when a human pins her down to give her eye drops.
Example System Prompt and Chat Prompt here.
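As a sketch of what the "Good" version might look like in code: an LLM turns a vet tech's free-form notes (already transcribed from speech) into a structured intake record. The model name, field names, and transcript below are all hypothetical.

```python
# Hypothetical sketch: structure a vet tech's spoken intake notes.
import json
from openai import OpenAI

client = OpenAI()

transcript = ("Bella is a six-year-old spayed female beagle mix, here "
              "for itchy eyes; owner says she has been rubbing her face "
              "on the carpet for two days.")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {"role": "system",
         "content": "You are a veterinary intake assistant. Extract the "
                    "patient's name, age, sex, breed, and chief complaint "
                    "from the transcript and reply with JSON only."},
        {"role": "user", "content": transcript},
    ],
    response_format={"type": "json_object"},  # ask for machine-readable JSON
)
print(json.loads(response.choices[0].message.content))
```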
7) How many tokens and how much test-time computation would it take to fully capture my dog’s personality?
First of all, the real thing is better. To see what the latest models are capable of, I used the reasoning models to help generate prompts for the diffusion models, to see how closely I could replicate my dog’s appearance. The reasoning models capture the veterinary terminology very well, and increased specificity in prompting leads to better results from the diffusion models. The diffusion models get close, but don’t seem to have the level of expert veterinary knowledge that the reasoning models do (yet).
Example Prompt and Resulting Image here.
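A minimal sketch of that two-stage pipeline, with assumed model names standing in for whichever reasoning and image models you use:

```python
# Stage 1: a reasoning model writes a highly specific, vet-accurate prompt.
# Stage 2: that prompt drives the image model. Model names are assumptions.
from openai import OpenAI

client = OpenAI()

prompt_response = client.chat.completions.create(
    model="o3-mini",  # assumed reasoning model
    messages=[{
        "role": "user",
        "content": "Using precise veterinary terminology, write a one-"
                   "paragraph image prompt describing a tricolor beagle "
                   "mix: coat pattern, ear set, eye color, body condition.",
    }],
)
image_prompt = prompt_response.choices[0].message.content

image_response = client.images.generate(
    model="dall-e-3",  # assumed diffusion/image model
    prompt=image_prompt,
    size="1024x1024",
)
print(image_response.data[0].url)
```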
8) If spending an hour a day on social media can radicalize you politically, can spending an hour a day reasoning with an AI on technical topics make you more technical?
Spending an hour a day searching the internet for technical topics and reading about them can certainly make you more technical. If AI helps you get that information more efficiently, and if the AI actually works (hallucinating less often than a web search would lead you to incorrect technical information), then AI can be more efficient than a web search at making you more technical. The AI needs to be “aligned”, just as the web search needs to be “unbiased”. And make sure that one of the topics you learn, whether through web search or AI, is how to read a map in case you lose internet access.
BILL's comment on point 8: While I agree that spending an hour a day (or more) reasoning with an AI on a technical topic is a good idea, make sure you don't fall into a rabbit hole. For example, while using AI to help me with Ramsey Theory, I asked it, on a whim, to write me a poem about Ramsey Theory. It wasn't that good, but it was better than what I could do. I won't reproduce it for fear of sending my readers down a rabbit hole.
Wonderful post! I had some thoughts that are relevant/build on the points you made:
- On (2), the first thing that I thought of was the rise of low-code / no-code tools. Do you think there will be a point where the line blurs between true "mastery" over programming languages and the English language? Will we all just become "creators?"
- On (4), "Underuse the tool, and you waste a lot of time. Overuse the tool, and you don't understand your own product."
I totally agree with this point. A more personal anecdote: when trying to learn a new API that came out, I was able to upload the docs and some samples into an LLM, and it gave me a well-written template. However, I had to then add my own context of the product to make it useful. I like to think that I used the tool enough that it a) saved time and b) allowed me to still understand what I was building.
> Do you think there will be a point where the line blurs between true "mastery" over programming languages and the English language?
I think the line has already been blurred on this for quite some time. But I mean this quite literally.
> Will we all just become "creators?"
Or will we all just become "programmers"?
> A more personal anecdote: when trying to learn a new API that came out, I was able to upload the docs and some samples into an LLM, and it gave me a well-written template. However, I had to then add my own context of the product to make it useful. I like to think that I used the tool enough that it a) saved time and b) allowed me to still understand what I was building.
I love this anecdote! It seems you were able to strike a balance, which is tricky because where to draw the line evolves as the technology advances.
Re point 2: from the little I've tried using LLM coding tools, I've been impressed (although they aren't infallible). Assuming future coding tools are generating something like machine code, though, I'm guessing there still might be a need for assembler, or Python / C++ / Rust / ...
If some "C++ successor language" evolves, if it's easier for humans to write bug-free code in it, presumably it would also be easier for LLMs to write code in that language. (And presumably LLMs will still need to be able to "think" about code.)
> Re point 2: from the little I've tried using LLM coding tools, I've been impressed (although they aren't infallible).
Agree! I usually spend more time debugging than coding anyway. How about you?
> Assuming future coding tools are generating something like machine code, though, I'm guessing there still might be a need for assembler, or Python / C++ / Rust / ...
For sure! But it might be obfuscated from the "programmer".
> If some "C++ successor language" evolves, if it's easier for humans to write bug-free code in it, presumably it would also be easier for LLMs to write code in that language. (And presumably LLMs will still need to be able to "think" about code.)
These are great points. Even when it's easier to write bug-free code, it's still easy to write bug-ridden code. Look at all of the compile-time type safety and memory management that Java and Scala give us, for example -- JVM-language programmers still write code with bugs; the bugs just aren't type errors or segmentation faults.
That was a great read!
The comparison to poetry was fascinating and one that I hadn’t heard before, and it will rattle around my brain for a bit because I’m pretty positive it also can generate better poetry than I can.
I think it made sense to restate that while AI is incredible right now, it still has a way to go before it can be the end-all be-all (the dog example, and the amount of hallucination taking place).
Your point number four was a simpler and more concise framing, in terms of AI use rather than capability, than what I think of from the Centaurs and Cyborgs on the Jagged Frontier study and AI's ability to help companies tackle those B- tasks.
And I liked your point about the vet techs because it probably helps ease the minds of those who think they are going to be replaced by machines.
Thanks for sharing!
> I’m pretty positive it also can generate better poetry than I can.
Better on what dimension?
> Centaurs and Cyborgs on the Jagged Frontier study and AI's ability to help companies tackle those B- tasks.
This is directly applicable to an effort I am involved with. Thank you for this reference.
> It probably helps ease the minds of those who think they are going to be replaced by machines.
It certainly helped ease my mind!
1) Isn't English too imprecise to be a programming language?
2) There already are Robotic Pets, and they seem to do pretty well:
https://www.aarp.org/caregiving/basics/info-2023/robotic-companion-animals.html
Here is a quote:
"The fury animatronic pooch acts like a normal canine--it wags, barks, turns its head in response to voices--but without the need for vet bills, long walks, or you know, picking up poop."
In the future will people care that their robotic pet is ``not real''? For that matter, we already project onto our pets emotions that aren't there.