Eric Dudas (Fully Threaded Radio), Contributor for Distributor’s Link Magazine
Recently, I spent the better part of a day paying close attention to the AI for Distributors Summit presented by MDM Distribution Intelligence. It was a virtual conference, offered free of charge.
The time was well spent.
It might sound like a snore-fest to some fastener folks, but one of the highlights for me was the conversation between Endries International CEO Michael Knight and MDM host Mike Hockett. They explored some of the big-picture issues surrounding AI during a session entitled “Risks and Ethical Considerations of AI Deployment.”
Admittedly, that’s not everyone’s keg of screws.
And to be fair, I took the most practical information away from some of the more hands-on sessions. For example, prompt engineering is an important concept in the expanding AI lexicon, and I just recently began to appreciate its significance.
Turns out the vaunted AI guru himself, Sam Altman, has suggested young techies should learn English as their first programming language. He says that as long as they can correctly explain what the code needs to do, AI will write it better than they ever could. Mind blowing. Just have a nice conversation with the computer and everything will work out fine.
When I first heard that, I thought to myself, “Close the pod bay door, HAL.”
Setting aside those trepidations, and with the idea of developing my own chops in the new fine art of AI pseudo-coding, I upgraded to ChatGPT 4 and began the conversation with AI that I’d been putting off for far too long.
I mean, this stuff has been out since November of 2022. Ancient history, man. Come on already.
My first attempt at prompt engineering began with building a simple graphic.
ME: “Imagine Colonel Sanders holding a giant fully threaded cap screw in a blizzard next to a warehouse rack filled with boxes of fasteners.”
ChatGPT: “The image depicting the scene you described is shown above.” (Figure A)
Figure A: ChatGPT draws a predominantly fastener-related image based on a very simple prompt.
I was impressed.
Cartoonish? Yes. But with that simple description, Chat (I call him that for short now) came up with a grandfatherly-looking guy in a white coat, standing in an industrial-looking warehouse where it was somehow snowing. And he is holding a large fully threaded fastener.
Sure, you can argue about the head and point types, but just look at this picture and think how you would feel if you were in the graphic arts business.
I decided to make some improvements.
ME: “Add a white goatee to the Colonel and change to a southern style tie.”
Chat processed for a while, and then returned a link with all the Python code I would need to overlay the requested changes on top of the picture, if it were displayed on a web page. That was very interesting, but not what I was expecting. I wanted a new image with the requested changes made to it.
You can tell I’m not a Python programmer, because those guys are sweating right now.
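For the curious, the kind of overlay code Chat handed me looks roughly like the following. This is a minimal sketch using the Pillow imaging library; the file names and paste coordinates are my own placeholders, not anything Chat actually generated:

from PIL import Image  # the Pillow imaging library (pip install pillow)

# Load the original AI-generated picture and the overlay art (the goatee).
base = Image.open("colonel_fasteners.png").convert("RGBA")
goatee = Image.open("goatee.png").convert("RGBA")

# Paste the overlay at chosen coordinates, using its transparency as the mask.
base.paste(goatee, (220, 140), goatee)
base.save("colonel_fasteners_edited.png")

In other words, rather than redrawing the picture, Chat proposed pasting the changes on top of it. Clever, but not what I asked for.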
I decided to start over and make a new image, prompting Chat with an improved description.
ME: “Imagine Colonel Sanders with a white goatee and a southern style black string tie holding a giant fully threaded cap screw in a blizzard next to a warehouse rack filled with boxes of fasteners.”
Is it strange that by this point I started feeling a little rude for not saying please as I issued these requests? That’s another column for another day.
Chat processed for a few seconds.
ChatGPT: “The image has been created according to your description and is displayed above.” (Figure B)
Figure B: ChatGPT re-imagines the image with better prompting, and the Colonel begins to look more like THE Colonel.
Wow, it actually sort of looks like the Colonel now! And I mean Colonel Harland David Sanders of 11 herbs and spices fame. The whole picture was re-imagined and it looks much closer to what I had in mind with the original recipe, so to speak.
I decided to leave my exploration of Chat’s basic graphics capabilities there, duly amazed, and probe elsewhere. Where to start? Oh, I know. Leo wants a story about the podcast.
ME: “Tell me about Fully Threaded Radio.”
This began a very long process of me asking Chat about FTR, adding questions about Brian and Eric, specific details of the podcast, and Brian’s honorary Texan status. In a nutshell, Chat gave extremely wordy answers that only vaguely approximated the subject, padded with plenty of extraneous detail.
Did that sound obtuse?
I decided to do what I did with the picture drawing exercise and supply Chat with some of the answer within the question, expecting Chat to feed that information back as part of the answer. And that’s just what happened. In a few minutes, Chat seemed to know, or “learn”, quite a bit about the podcast and its hosts.
At certain points during the process it seemed like Chat was making more progress than I could account for based on the information I was giving it. Then this exchange took place, with Chat’s replies truncated for readability:
ME: “Are you searching the web for any of this information you are providing about the podcast or is this knowledge stored elsewhere?”
ChatGPT: “The information I’ve provided about the “Fully Threaded Radio” podcast is based on my training data up until April 2023. I don’t search the web or access live data for information. My responses are generated from the knowledge I was trained on, which includes a wide range of sources and data. For the most current and detailed information, especially for updates or specifics not covered in my training, consulting direct sources such as … would be recommended.”
ME: “I’m just wondering if I give you extra information about the subject, do you then consider it training data and add it to what you consider your general knowledge?”
ChatGPT: “When you provide extra information during our conversation, I use that information to tailor my responses to your questions within the context of our current interaction. However, I don’t retain or learn from user inputs in a way that would modify my training data or update my general knowledge for future interactions. Each session with me is stateful within the conversation but doesn’t contribute to a learning process across conversations.”
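That answer lines up with how developers actually build on these models. The “memory” inside a conversation is just the caller resending the entire transcript with every request. Here is a minimal sketch using OpenAI’s Python SDK; the model name and messages are illustrative, and it assumes an API key is set in the environment:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Start a conversation with a single user message.
history = [{"role": "user", "content": "Tell me about Fully Threaded Radio."}]
reply = client.chat.completions.create(model="gpt-4", messages=history)

# To "continue" the conversation, append the answer and the next question,
# then send the ENTIRE history again. The model itself retains nothing.
history.append({"role": "assistant", "content": reply.choices[0].message.content})
history.append({"role": "user", "content": "Who are the hosts?"})
reply = client.chat.completions.create(model="gpt-4", messages=history)

Nothing persists on the model’s side between calls, which is exactly the “stateful within the conversation” behavior Chat was describing.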
ME: “If you don’t have the capability to update yourself by retaining knowledge, then what do you consider “machine learning”?”
ChatGPT: ““Machine learning” refers to a subset of artificial intelligence (AI) where computer algorithms improve automatically through experience and by using data. These algorithms build models based on sample or historical data to make predictions or decisions without being explicitly programmed to perform the task. Machine learning involves…”
Chat went on for three lengthy paragraphs with details about its definition of machine learning. It seems to know a ton about machine learning. I didn’t teach it any of that.
I pressed the point.
ME: “Part of what you’re saying is contradictory. You need to be able to assimilate new data, or understanding, as well as make new connections among data elements and concepts involving those data. Do you agree?”
Chat’s answer is a little long, but the whole thing is worth sifting through because it gets to the point:
ChatGPT: “You’re touching on a nuanced aspect of how AI and machine learning models operate, and I see where the confusion might come from. Let me clarify:
- Learning Phase: During the training phase, machine learning models, including large language models like me, assimilate new data, understand patterns, and make connections among data elements and concepts. This is when the model “learns” from vast datasets. It adjusts its internal parameters to minimize errors in its predictions or outputs based on the input it receives. This process is iterative and involves going over the training data multiple times to improve accuracy and understanding.
- Deployment Phase: Once the model is deployed, as in the case with me in this interactive setting, it operates based on the state it was in at the end of its last training phase. In this phase, the model does not continue to learn from new data or interactions. Instead, it applies the patterns, relationships, and knowledge it acquired during training to answer questions and engage in conversations. The model can make connections based on its training but does not assimilate new information or adjust its understanding based on interactions post-deployment.
So, while I’m capable of “making new connections among data elements and concepts” based on my training, I do not assimilate new data or update my understanding in real-time during user interactions. Any learning or assimilation of new information would occur during a subsequent training phase conducted by the developers, not dynamically during deployment.
The distinction is between the model’s capability to apply learned knowledge (which I can do) and the ability to update its knowledge base in real-time based on new data (which I cannot do post-deployment). This ensures consistency and reliability in responses but also means the model’s knowledge is fixed as of its last update and does not include new information or changes that occur after that point.”
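The “adjusts its internal parameters to minimize errors” bit is the whole trick. Stripped of all scale, the learning phase Chat describes looks something like this toy Python loop; the numbers and the single parameter are made up purely for illustration, whereas a real model tunes billions of parameters:

# Toy training loop: nudge one parameter until predictions match targets.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # input/target pairs (target = 2x)
w = 0.0                # the model's lone "internal parameter"
learning_rate = 0.05

for epoch in range(200):                # pass over the training data many times
    for x, target in data:
        error = (w * x) - target        # how wrong the current prediction is
        w -= learning_rate * error * x  # adjust w to shrink the error

print(round(w, 3))  # converges toward 2.0

Once training stops, w never moves again, no matter what you tell the model. That is the deployment phase in miniature.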
Notice that Chat refers to itself in the first person, but refers to the model, which it considers itself to be a part of, in the third person. Is it trying to confuse me on purpose? I don’t recall HAL ever doing this.
I called bullcrap on Chat:
ME: “Then the information you present to me based on what I supply to you and that is not already in your training data, is you telling me what I’ve already told you, with you simulating telling me. Is that accurate?”
ChatGPT: “Yes, that’s an accurate way to describe it.”
At that point I decided to leave my investigation of prompt engineering for the moment. After all, Leo asked me to write a story about the podcast, and I’m getting diverted.
Change of course:
ME: “Imagine a large bolt affected by hydrogen embrittlement.”
<Escape>
Figure C: Will there be hydrogen embrittlement in the new AI-generated metaverse? If so, this is what it might look like.
Now get out there and sell some screws. Ask ChatGPT to teach you if you don’t know how. You might need to do some prompting, but you’ll be surprised what you learn.
Distributor’s Link Magazine is an award-winning, multimedia publication dedicated to reporting, educating, and promoting the fastener industry since 1976.
Contact:
Tracey Lumia
Sales & Marketing Director
[email protected]