Misc

ChatGPT: an interactive Wikipedia that really knows how to code

Reading Time: 2 minutes

It seems that there is an endless supply of large language models being applied to interesting situations. This time it’s a lot closer to everyday life than the previous models. OpenAI released a free beta of ChatGPT, a chatbot backed by GPT-3.5 that can answer scientific questions about the world, generate recipes, and write better-than-average code. Here is the blog post that explains how it works: https://openai.com/blog/chatgpt. And if you have an OpenAI account, here is the beta: https://chat.openai.com/chat

I was fortunate enough to get to test the product this week, and it’s surprisingly user friendly. There is just something about the chatbot framework that really intrigues me. Maybe it’s my strong urge to engage in interesting conversations. It’s able to answer complicated scientific questions ranging from quantum mechanics to biological pathways to mathematical concepts. You do have to ask it to give examples and explain further to get more detailed information. However, if you are an expert in the field, you may find the information too shallow. For example, I previously published scientific papers on the biological pathways that control pupil dilation in rodents, and I was not able to get detailed information at the level of those papers. This might not be a bad thing, though, given the flawed model called Galactica that was introduced a few weeks ago. It was trained on scientific papers to generate text that reads like scientific papers. The questionable outcome was authoritative-sounding text with obviously wrong information. Being humble works better in this case.

I also tested it on math concepts such as the Taylor series and the Fourier transform. It was able to give good explanations and examples. Another of the model’s strong suits is its ability to generate above-average programming code. This is not surprising, since previous GPT models have been used in the Copilot product to generate code and documentation. Yet it’s still nice to see the model include documentation and explanations of the generated code. On the side of more daily tasks, it is also able to generate cooking recipes that seem reasonable, although I have not tested the actual recipes yet.
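To give a sense of the style of output I mean, here is a hand-written sketch of my own (not actual model output) in the shape ChatGPT typically returns when asked for a small utility, say a compound-interest calculator: a documented function plus a short usage example.

```python
def compound_interest(principal: float, rate: float, years: int,
                      compounds_per_year: int = 12) -> float:
    """Return the final balance after compounding interest.

    principal: starting amount
    rate: annual interest rate as a decimal (e.g. 0.05 for 5%)
    years: number of years to compound
    compounds_per_year: how many times interest is applied per year
    """
    return principal * (1 + rate / compounds_per_year) ** (compounds_per_year * years)


# Example: $1,000 at 5% annual interest, compounded monthly for 10 years
print(round(compound_interest(1000, 0.05, 10), 2))
```

What stood out to me is that the model volunteers the docstring, parameter explanations, and a usage example unprompted, which is exactly the part most human-written snippets skip.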

Regarding limitations, I found some questions that the model refuses, or is unable, to answer. For example, when I ask it for ethically questionable instructions, the model will refuse to give them. Or if I ask the model about things that change or are hard to determine, it will decline to answer. For example, if I ask what the bright star next to the moon is called, it will tell me that the answer depends on time and location.

All in all, it feels like an interesting tool to test and research further. People have mentioned that this model feels like a reasonable educational tool if developed properly. And since there are so few AI products applied to education, I really hope more educational products can be developed using this model.