Sunday, January 12, 2025

Random Thoughts on AI from someone in the REAL WORLD

Guest Post from Nick Sovich. 

-----------------------------------------------------------------------------

Bill Gasarch recently blogged on RANDOM THOUGHTS ON AI here. He is in the realm of theory; I am in the realm of applications, so I asked if I could do a post on AI from that background. He agreed, and here's the post:

1) Words Matter

Today you tell a foundation model who to be and how to behave by giving it a system prompt and a chat prompt. The system prompt is the AI analog of 

I’m a CS professor, I’m an expert in Ramsey theory, and my goal is to help students learn.

The chat prompt is the AI analog of


Hi CS Professor, can you please help me write a blog post to formulate my thoughts on AI?

System prompts and chat prompts are made of words. Getting the words right is very important, in the same way that writing the right lines of code is important.

So even if you have an AI that *can* be a 157-IQ expert in any field, you still have to tell it what kind of expert to be, what its worldview is, and how to communicate, because it can’t hold infinitely many worldviews and communication styles all at once.

An example System Prompt and Chat Prompt are here.
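
To make this concrete, here is a minimal sketch of how a system prompt and a chat prompt are passed to a model. It assumes the OpenAI Python client with an API key in the environment; other providers use a similar message structure.

```python
# Minimal sketch: passing a system prompt and a chat prompt to a model.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
# other providers use a similar message structure.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # System prompt: who the model is and how it should behave.
        {"role": "system",
         "content": "I'm a CS professor, I'm an expert in Ramsey theory, "
                    "and my goal is to help students learn."},
        # Chat prompt: what the user actually asks.
        {"role": "user",
         "content": "Hi CS Professor, can you please help me write a blog "
                    "post to formulate my thoughts on AI?"},
    ],
)
print(response.choices[0].message.content)
```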


2) If English becomes the #1 programming language, then what’s the difference between a poet and a computer scientist?

There has been talk in the industry of English eventually becoming the #1 programming language. What does that mean? A poet has mastery over the English language. A computer scientist has mastery over programming languages. A poet can create works of art. A poet can be creative, but so can a computer scientist. A computer scientist can be pragmatic, but so can a poet. Will poets become better programmers than computer scientists? Will computer scientists become better artists than poets? Does it matter?

3) What is code and what is poetry?

Bad code doesn’t compile, or it compiles but doesn’t run, or it runs but has bugs. Bad poetry doesn’t express the author’s intent, or it doesn’t evoke the reader’s emotion, or it doesn’t convey a message to society. Bad code is still code, and bad poetry is still poetry. Humans can produce good and bad code, and good and bad poetry. AI can produce good and bad code, and it can certainly produce bad poetry. Can AI produce good poetry?


4) AI is a tool, like auto-complete. 

This is both good and bad.

Underuse the tool, and you waste a lot of time.

Overuse the tool, and you don't understand your own product.

5) Where should AI start and end in academia?


This is an important question, worthy of a separate blog post that I might write in the future.

6) AI can either replace people or empower people. 

We should look for ways that it empowers people. A nice 38-second clip about that is here.

As an example:

Good: Giving the vet tech tools to record my intake questions through natural language instead of pen and paper.

Bad: Automating away all vet tech jobs and replacing them with humanoid robots that would scare my dog anyway. It’s traumatic already when a human pins her down to give her eye drops.

An example System Prompt and Chat Prompt are here.
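
To sketch what the good version might look like in code: a transcript of the vet tech's spoken intake notes goes in, and structured fields come out. The schema, the transcript, and the model choice below are invented for illustration, again assuming the OpenAI Python client.

```python
# Hypothetical sketch: turn a vet tech's spoken intake notes into
# structured fields. The schema and transcript are invented for
# illustration; assumes the OpenAI Python client as above.
import json
from openai import OpenAI

client = OpenAI()

transcript = ("Six-year-old spayed female beagle, here for itchy eyes, "
              "eating normally, no vomiting, vaccines up to date.")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Extract veterinary intake fields from the transcript. "
                    "Reply with JSON using the keys: age, sex, breed, "
                    "chief_complaint, appetite, vaccines_current."},
        {"role": "user", "content": transcript},
    ],
    # JSON mode keeps the output machine-readable for the clinic's records.
    response_format={"type": "json_object"},
)
print(json.dumps(json.loads(response.choices[0].message.content), indent=2))
```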

7) How many tokens and how much test-time computation would it take to fully capture my dog’s personality?

First of all, the real thing is better. To see what the latest models are capable of, I used the reasoning models to help generate prompts for the diffusion models, to see how closely I could replicate my dog’s appearance. The reasoning models capture the veterinary terminology very well, and increased specificity in prompting leads to better results from the diffusion models. The diffusion models get close, but don’t seem to have the level of expert veterinary knowledge that the reasoning models do (yet).

An example Prompt and Resulting Image are here.
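
The two-stage pipeline described above is easy to sketch: a reasoning model expands a short description into a detailed, terminology-rich prompt, which is then handed to an image (diffusion) model. The model names below are placeholders; any reasoning model and diffusion backend would slot in the same way.

```python
# Sketch of the two-stage pipeline: a reasoning model writes a detailed,
# veterinary-terminology-rich prompt, which then drives the image model.
# Model names are illustrative; assumes the OpenAI Python client.
from openai import OpenAI

client = OpenAI()

# Stage 1: the reasoning model expands a short description into a
# precise, detailed image-generation prompt.
detailed_prompt = client.chat.completions.create(
    model="o1",  # placeholder for whichever reasoning model you use
    messages=[
        {"role": "user",
         "content": "Write a detailed image-generation prompt, using "
                    "precise veterinary terminology, for a photorealistic "
                    "tricolor beagle."},
    ],
).choices[0].message.content

# Stage 2: hand the detailed prompt to the image (diffusion) model.
# DALL-E 3 caps prompts at 4000 characters, hence the truncation.
image = client.images.generate(
    model="dall-e-3",
    prompt=detailed_prompt[:4000],
    n=1,
)
print(image.data[0].url)
```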

8) If spending an hour a day on social media can radicalize you politically, can spending an hour a day reasoning with an AI on technical topics make you more technical?

Spending an hour a day searching the internet for technical topics and reading about them can certainly make you more technical. If AI helps you get that information more efficiently, and if the AI actually works (i.e., it hallucinates less often than a web search leads you to incorrect technical information), then it follows that AI can be more efficient than a web search at making you more technical. AI needs to be “aligned,” just as web search needs to be “unbiased.” And make sure that one of the topics you learn, whether through web search or AI, is how to read a map in case you lose internet access.

BILL's comment on point 8: While I agree that spending an hour a day (or more) reasoning with an AI on a technical topic is a good idea, make sure you don't fall into a rabbit hole. For example, while using AI to help me with Ramsey Theory, I asked it, on a whim, to write me a poem about Ramsey Theory. It wasn't that good, but it was better than what I could do. I won't reproduce it for fear of sending my readers down a rabbit hole.

1 comment:

  1. Wonderful post! I had some thoughts that are relevant to and build on the points you made:

    - On (2), the first thing that I thought of was the rise of low-code / no-code tools. Do you think there will be a point where the line blurs between true "mastery" over programming languages or the English language? Will we all just become "creators"?

    - On (4), "Underuse the tool, and you waste a lot of time. Overuse the tool and you don't understand your own product."

    I totally agree with this point. A more personal anecdote: when trying to learn a new API that had just come out, I was able to upload the docs and some samples into an LLM, and it gave me a well-written template. However, I then had to add my own context about the product to make it useful. I like to think that I used the tool enough that it a) saved time and b) still allowed me to understand what I was building.
