Artificial Intelligence is not only a technological advance but a game-changing force that allows us to shape a better future for everyone. In this episode, Jeffrey Palermo, Chief Architect of Clear Measure, dives into the fascinating world of Artificial Intelligence. As a systems expert, Jeffrey shares his insights on what AI is, its significant impact, and the immense potential it holds for the future. He delves into ChatGPT, one of the most remarkable developments in AI, and its capabilities. He addresses common questions and concerns about AI, including the potential risks and benefits of this emerging technology and the impact it may have on job security and education. Furthermore, Jeffrey takes us on a journey from the early days of technology to the advancements of AI and how it empowers society. His unique perspective and valuable insights shed light on the development of technology and its role in shaping our world today. Tune in as we explore the exciting and dynamic world of AI with Jeffrey Palermo.
—
GPT-AI With Jeffrey Palermo Of Clear Measure
Jeffrey Palermo is our guest. I’m excited to have him here. He is the Chief Architect of Clear Measure, a software architecture company that empowers its clients’ development teams to be self-sufficient: moving fast, delivering quality, and running their systems with confidence. He is a systems guy. I’m so excited to have him join me. He’s my good friend and a brother from another mother. Jeffrey Palermo, it’s good to have you here.
It’s a pleasure to be here. Thanks so much for inviting me on.
It’s a real honor. It would be helpful to let our audience get to know you a little bit. Tell us a little bit about your background. What brought you to where you’re at? What’s something interesting about you?
I was born at a very young age. Besides that, I like to race off-road motorcycles in the local amateur league. I have three kids and a beautiful wife of many years. I love country dancing. I did a lot of teaching and was part of a performing group way back when. I’ve been a computer programmer since 1997, when I got my first job, and I’ve been loving it ever since.
You now have Clear Measure, which specifically does training and contract programming.
I started that company ten years ago, in 2013, and it got off to a fast start. In 2016, we received the Austin Fast 50 award and were listed as the number one fastest-growing small business that year, among the fastest-growing startups at the time. It was hard to grow so fast. We had some rough years after that, but it has been great. We’re grateful to have long-term clients and an impact across all 50 states, plus some international clients. We love helping companies empower their internal software teams to be highly effective.
I can’t wait to get into that. As a background, you are a Microsoft .NET-centric programming shop.
The IT world is massive. The software world is massive. We focus on roughly half of the software world that uses Microsoft technologies. Our customers are going to have software engineers in-house that are using Visual Studio. For the CFOs out there, you see a Visual Studio subscription line item when you review all the expenses. That’s the rough qualification of a client. They probably would find us useful if they were buying Visual Studio for their developers.
I want to get into the topic of Artificial Intelligence or AI. Let’s start with getting a definition of what exactly artificial intelligence is.
I like to go back and look at history when thinking about new things. I invite anybody to do this. Go on Wikipedia, look up the artificial intelligence page, read the definition of what it says, and then look back in the history of that page for ten years and then ten more years. You will see that page is constantly changing and the dust has not settled. I don’t expect the dust to settle when we try to continually update the documentation on that, but it all goes back to 1956 at Dartmouth College.
John McCarthy, who’s the Father of Artificial Intelligence, coined the term. He did a summer workshop and got some funding. His hypothesis, or pitch to get the funding, was, “We could sufficiently identify and describe every feature of human intelligence so precisely that we could then make a machine that simulates that feature of intelligence.” If we take that and extrapolate it forward, that has happened time and time again. We have identified something that only humans could do. We described it, analyzed it, and then built a machine that could do it. We take for granted adaptive cruise control, talking to Siri or Alexa, voice recognition, text-to-speech, and computer vision.
What we find is that once we have described something very precisely and created a machine to simulate that aspect of what was previously human intelligence, we tend to give it a more specific name and stop calling it artificial intelligence. We don’t say, “Did you put on your cruise control AI in your car?” No. Once we have built the machine to simulate it, we tend to stop calling it AI.
We carve it out and put it into a more functional description of what it does. That’s very interesting.
It’s because we have applied it.
It’s also interesting how long it has been around, along with the definition and the theory of what it was. When you look at it, a lot has been accomplished, but where people take it is where we get messed up. It’s under development. There’s a lot going on. On November 30th, 2022, the most significant development in AI arrived with a new product called ChatGPT. I want to get into that. It seems like a leap forward. What is going on with ChatGPT? What is unique about it?
There’s an organization called OpenAI. What you’re talking about is the URL, Chat.OpenAI.com. Microsoft invested $1 billion or $2 billion in that.
It is a little bit of an investment.
Microsoft is taking that technology and plans to embed it into Windows and Microsoft Office. If anyone has played around with it, all of this is going to be ubiquitous on desktop computers, and probably phones, in short order. GPT stands for Generative Pre-Trained Transformer. It layers onto all of the advances in machine learning, whereby you feed a big machine learning model a whole bunch of data that is labeled in very specific ways. That’s the pre-training part.
On top of that, the generative aspect is a huge breakthrough. We have been able to ask a question and get responses for a long time with Siri, Alexa, and whatnot, but generative means taking this machine learning model and generating what appears to be new information by being conversational. It generates new paragraphs that didn’t exist before. It’s not just giving us back access to existing information.
It’s not pointing you places the way Google, Bing, or whatever search engine you’re using does. When you put in a search there, it brings back recommended websites; this doesn’t. It generates the answer itself. There’s one thing you pointed out when you and I first started talking about this. All of this is so new. November 30th, 2022 is when this thing came out and people started playing around with it. You pointed out that they took all the data on the internet through 2021 and dumped it into ChatGPT, or OpenAI. That means it’s limited going forward; it doesn’t know anything that has been developed or added to the internet since that point.
That is correct. Specifically, a lot of information has since been taken out. I don’t think anybody has the full picture, but some of the reports from the early testing of this showed that it gave responses that they didn’t want it to give. Some of the responses were reported to be vile and toxic. What they realized was that we can’t put all of the available knowledge of the world into this because there’s some bad information out there. If we think of all of the books on the planet, we think, “Books are good,” but there are a lot of filthy pornographic novels out there. Do you want the examples of writing it learns from to be that type of writing when people are getting help with how to describe things?
There was a write-up of the process. A massive project took place to censor and edit the information that went into this. Contractors hired a lot of people, and there were reports of the mental problems those employees went through. Think about it. If your job is to look at bad content and label it as bad, you’re ingesting and focusing on the worst of the worst out there. It’s a psychological problem. This information has been heavily censored because of that, so they have chosen what they want to be in this model.
Thankfully, they did that. The last thing my eyes want to see or read is something foul and distasteful. I know it’s a spectrum, and one man’s pornography is another man’s art. We understand there’s a line somewhere. How do they navigate that? It is what it is. We’re using the tool with a lot of that bad stuff taken out. That’s a whole other topic: what that did, what that caused, and why.
It’s important to notice that this is the architecture of this type of system. A person has to choose what data to put into this model and has to choose how to label that data. These types of systems will always be biased by the nature of the architecture because there’s that choice. That’s just the design. It’s not good or bad. This is a general-purpose one that these organizations are building but more people will be building their own. They will choose what data to put in and how to label the data that is put in. You will have specialized models.
That’s where it starts getting into some of the applications and the power of what this could be. We’re already seeing this with Amazon’s Alexa. It’s called artificial intelligence, and it came out years ago. Tie ChatGPT and Alexa together and give us a trajectory of where we’re heading.
Microsoft has already announced that it’s going to be on the desktop and in Microsoft Word.
When you say it’s going to be in Microsoft Word, are there any insights into how that’s going to work? You’re not behind the veil, so you can’t know for sure, but I’m sure based on what you’ve read, there’s some anticipation of what could be there. Can you give us some examples?
Microsoft Word is used to make documents. The chat application on top of GPT-3 and the experimental GPT-4 is already demonstrating this. It can write paragraphs, outlines, and computer code. Microsoft Word is a natural extension: you jam out an outline for a document in Microsoft Word, maybe put in some bullet points for the points you want to make in every section of your outline, and then hit a button that says, “Go.” Microsoft Word fills out the prose with good grammar to make a document of your points. You make sure that you’re happy with it. That is a very small step from what we’re already seeing demonstrated.
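To make the outline-to-draft idea concrete, here is a minimal sketch of the same workflow done directly against a hosted GPT-style model using OpenAI’s Python SDK. The model name, prompt wording, and example outline are assumptions for illustration only; this is not how Microsoft’s Word integration actually works under the covers.

```python
# Sketch: expand a rough outline into draft prose with a GPT-style model.
# Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set;
# the model name and prompts are illustrative, not any product's real integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

outline = """
Title: Why our help desk needs a knowledge base
- Tickets repeat the same handful of questions
- Agents waste time re-writing the same answers
- A searchable knowledge base cuts response time
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your account offers
    messages=[
        {"role": "system", "content": "You expand outlines into clear, well-structured prose."},
        {"role": "user", "content": f"Write a short document from this outline:\n{outline}"},
    ],
)

print(response.choices[0].message.content)  # the generated draft, ready for a human to review
```

The point of the sketch is the division of labor Jeffrey describes: the human supplies the outline and the judgment, and the model supplies the first pass at the prose.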
It brings up the question of AI risk. Is this a risk in education? Do we have to reevaluate what we’re doing in education? We have already heard about how college kids looking to make things easier are using ChatGPT to write their papers. Universities are trying to stop it and asking, “Should we be stopping this? How should we be viewing this?” I would love to get your thoughts on the risks versus the assets we have here with ChatGPT.
It makes me think about history again because if we know history, then we can know what to expect. If we don’t know history, then we think that everything about this is new and novel. We are either fearful of it or we’re saying, “This is new. This changes everything.” There have been huge inventions that have come along the way that have changed paradigms.
Some people have resisted it and some people embraced it. Different jobs were eliminated. New jobs were created. There were companies that used to create horse whips. Most of them are out of business because buggies are not drawn by horses anymore. We have automobiles. We don’t necessarily need as many of those now. I learned to type on an electric typewriter in the seventh grade.
I’m so old. I learned on a manual, but that’s all right. We won’t get into that.
I have a manual at home. My kids know what it is to type on a manual and to try to unstick those keys.
I have one where the letters come up. It’s an old Underwood. It’s a collector’s item. I have that and the Selectric typewriter. I have my first IBM PC with dual floppies in there. I’m developing a museum of all this technology and how things work.
There’s another piece of history for you. If you’ve ever looked down at your keyboard, you might have heard that there was a scientific study to find the best, most productive keyboard layout so that we could type as fast as possible, and some experts came up with it. In reality, the layout is the way it is so that manual typewriters would not jam as frequently on common words. It was because of the manual typewriter design. That’s the only reason our QWERTY keyboard is the way it is. Now that we don’t have manual typewriters anymore, there’s no reason for it, but it sticks.
Other than we’re programmed that way now.
To your question about where things are going, people push the boundaries and use it. In Math class back in the day, it was, “Don’t use a calculator.” Now, the teachers tell the kids in Math, “Use the calculator.” They realize that if you don’t know this type of Algebra, the calculator isn’t going to help you. It’s a tool and people are going to use it. We have to realize that skills are going to be lost. There’s no getting around it. Back in the day, there were TV dinners. Now, there are massive amounts of prepared food.
How many people do not know how to bake bread? That used to be a skill that every human on the planet knew. Now, a minority of people, at least here in this country, know how to bake bread. What about handwriting and penmanship in the age of tapping and typing? I had to put my children in a specific class so that they would learn cursive handwriting because it’s no longer taught in so many schools. Penmanship in general is not taught in any formal way.
Will we lose the skill of drafting entire paragraphs because we go to outlining and bullet-pointing and let the computer fill in the rest? It will probably happen. It’s anybody’s guess, but every time a technology comes around, skills are lost. We lament, “If this machine ever broke, who would ever know how to do it?” For example, if all the computers broke, would anyone know how to create a silicon chip again from scratch? Does anyone know how to build an internal combustion engine for themselves? No. We get them from a handful of companies.
You raise a good point because we look at what we have lost with technology, like cursive writing. I love journaling. I’m a journaler, and I journal in cursive. That’s how we learned to write. There is some interesting brain science here. A little detour on this: there is real power in what happens when you write and journal in cursive versus printing out the letters. Your brain connects and flows better. It’s interesting. I also saw a hilarious poster. Someone held up a pencil and pointed from one end to the other: “Printer and delete button. Original technology.”
We’ve looked at the risks involved in this. We have talked about educational risk, but how are we starting to deploy it where it’s truly an asset? We may metaphorically forget how to make bread, and we’re going to lose some other skills as we move forward into this. What are some of the things you anticipate we’re going to benefit from as a society, in commerce or anywhere else, from ChatGPT and the development of AI?
I am looking forward to the further decentralization of control and information. We have been on this trend. Technology has been a hugely empowering force for so many people who would not otherwise have been able to create something, get a message out, or do certain types of work. I’m excited that this furthers the decentralization of our society so much more. Go back several decades. If you wanted to publish something, you had to have a connection with somebody with a printing press.
Even in the digital age, you still had massive media companies, and you had to convince one of them to run something because they controlled the distribution. Now, with the various social media platforms, you can publish something yourself. It can be ignored, and most things are ignored, but there’s no small group of a dozen people deciding whether something is going to be looked at or not. That’s the concept of going viral.
That’s where the massive pushback has happened over the last couple of years. It started with politics and then moved to vaccines, where social media companies did try to put in place some recentralizing censorship. The populace fought back, rejected it, and called it out. Elon Musk took over Twitter. It’s killing the company. That rolled back the initiative because once people have tasted more freedom, they rebel against going back to the previous control.
That’s good. You’re seeing this contribute to the decentralization of data, knowledge, repositories, and all this. It’s going to redistribute it, decentralize it, and put it out in the hands of more people but it’s still accessing centralized databases. Is it a veneer or an illusion that we’re decentralizing because we’re still looking at centralized databases? How would you respond to that?
It depends. If we’re specifically talking about GPTs, it comes down to whose implementation you’re using, because whoever’s implementation you’re using determines the database you’re using. The research experiment that OpenAI has put up is one database. Google has launched a closed preview of Bard, their GPT-style AI. You have to know whose database it is and what you can trust it for. My prediction is that more companies are going to train their own models. That means choosing the data you put in and choosing how to label the data you put in.
It sounds like an arduous process to be able to put that in. It has to be huge. Is this a game-changer only for the biggest companies?
Companies like ServiceNow, Freshdesk, Zoho Desk, and all the help desk ticketing companies have integrated knowledge bases. Imagine this. You’ve got Microsoft. Microsoft, Google, and those types of companies create infrastructure-level technology. It’s not specific to any one use case; it’s general technology. Microsoft is good at creating development tools on top of its technologies.
I predict all of these help desk companies, as one example, will extend their knowledge base searches, ticketing, chatbots, and whatnot so that the information put into a company’s knowledge base can be labeled in a certain way and then fed into a specific data model that uses the GPT technology. That way, a particular company’s chatbot, or even the agents using the search or the self-service search, can conversationally talk to the knowledge base. Tell it what your problem is and it will intelligently, with your specific data, tell you how to fix your washer and dryer or whatever it happens to be.
If they ask something specific, like “It’s not turning on,” and it is plugged in but still not on, then Microsoft’s database under the covers comes in, because now we’re into more general knowledge: “Check your breaker. Did you have a lightning strike? Are there any other lights off in your house?” That’s beyond fixing a washer and dryer. It’s not in the company’s knowledge base, but it falls back to the more general model. I expect it’s going to happen because it’s too obvious a use case.
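Here is a minimal sketch of that pattern: look up the customer’s question in a company knowledge base, hand whatever is found to a GPT-style model, and let the model fall back to general troubleshooting when the knowledge base comes up empty. The toy keyword lookup, the model name, and the prompts are all assumptions for illustration, not any vendor’s actual product.

```python
# Sketch: a help-desk answer grounded in a company knowledge base, with a fallback
# to the model's general knowledge when the KB has nothing relevant.
# The KB lookup is a toy keyword match standing in for a real search or embedding index.
from typing import Optional
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

KNOWLEDGE_BASE = {
    "washer won't start": "Check that the door latch clicks shut; the motor is interlocked with the latch.",
    "dryer no heat": "Verify the gas valve is open or, on electric models, that both breaker poles are on.",
}

def lookup(question: str) -> Optional[str]:
    """Toy retrieval: return the first article whose key words appear in the question."""
    q = question.lower()
    for key, article in KNOWLEDGE_BASE.items():
        if any(word in q for word in key.split()):
            return article
    return None

def answer(question: str) -> str:
    article = lookup(question)
    context = article or "No matching knowledge-base article; give general troubleshooting advice."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": "You are a support agent. Prefer the provided article; if it does not apply, give general troubleshooting steps instead."},
            {"role": "user", "content": f"Article: {context}\n\nCustomer question: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("My washer won't start even though it's plugged in."))
```

The design choice mirrors the conversation: the company’s labeled data answers the specific question, and the general model steps in only when the specific data runs out.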
How is GPT different from what we have previously called AI? This is new. How is it different?
Everything new is called AI. Once we make a concrete application for it, we call it something different. Machine learning has had a good decade. From 2005 to 2010, it was talked about, not because it was new but in terms of building and creating things on top of these models. One of the applications that is very common now is sentiment detection, where you can feed in some text and it will guess the emotion, whether you’re happy, sad, angry, or anywhere on the emotional scale, which is important for customer service departments. I’ve been wondering why social media companies or even blog engines haven’t integrated this. Marketers, wouldn’t you want to know the tone and the general emotion rating of the articles, tweets, or LinkedIn posts you’re putting out, as a quality control?
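Sentiment detection of the kind described here is already available off the shelf. Below is a minimal sketch, assuming the Hugging Face transformers library is installed; the example texts are made up, and the model is simply whatever default the sentiment-analysis pipeline ships with, so treat it as an illustration rather than a production setup.

```python
# Sketch: off-the-shelf sentiment detection using the Hugging Face transformers pipeline.
# The first run downloads a small pretrained model; results are label + confidence score.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

drafts = [
    "We're thrilled to announce our new release, and we thank you for the support!",
    "Another outage. We apologize for the inconvenience, again.",
]

for text in drafts:
    result = classifier(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

Running a draft post through something like this before publishing is exactly the quality-control check Jeffrey is describing.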
That’s a great point.
There are so many companies that are generating HD movies with AI models. They’re generating images. They’re doing smart video transcriptions. There are so many applications now. It’s all an aspect of AI. In other words, each is a feature of human intelligence that someone has had to describe very precisely, and then they build a machine to do it. While it’s still a pipe dream, we call it AI because we’re not quite sure how better to describe it, and it seems to require some intelligence. Once we have described it, we have so many other terms to choose from, and we often choose a more specific term.
One of the big concerns people have, especially those that are in a function where they’re providing information, is that this is going to eliminate jobs. What is the potential of AI doing that?
It will, but every single technology has eliminated some jobs.
You and I were on a webinar with Finastra discussing this. You or one of the other panelists brought up how we have now gone to robotics. Guys and gals used to put bumpers on cars. Now we have a robot that does that whole process automatically. That specific job went away, but hopefully those people were repurposed into other things. Are we going to have a major shift away from jobs being done by humans? How adaptive do humans have to be as we start looking at the world that’s before us?
There’s probably going to be a trend or an early wave. Let me give an example. Say I wanted to get a job as a marketing content writer. I have gone to all kinds of websites that have all kinds of information about this and that. I read the articles, and they’re not too compelling, if I’m being honest, but there are a lot of content writers out there. The current technology is good. You can tell it, “Write me a 500-word article about this,” give it a few bullet points, and it jams out the whole thing.
If I wanted to get a job doing that and then used this tool to generate tons of copy, I could do it, but that’s not quality work. That’s quantity. There’s going to be a huge wave of people who don’t understand the value of quality. They will use it to generate a whole heaping pile of quantity. Google, and Microsoft with its Bing search engine, are going to have to figure out how to wade through an internet that was a certain size and quickly becomes 100 times that size because of generated content.
The job of these search engines is to index all of this content to figure out what to return in a search. It’s meta content, content that’s generated based on other content. It’s low quality. If all you need is to look up specific information, that’s fine, but what we like to read in magazine articles is not information lookup. We want unique insights.
It’s the critical thought that goes into being a good writer. I’m thinking of so many of my favorite writers, the words they use, and how they phrase them. If they’re a good writer, you can see how they phrased it and brought together the data they gathered, but at the end of the day, I want to know what they’re thinking and how they’re thinking about something so that I can benefit from it. At least at this level of ChatGPT, we’re probably not going to get that. That’s where the individual is still going to be hugely necessary. If they’re just turning out content like a factory, turning the lever and spinning out articles, that’s a lower level.
I will give you an example. At Clear Measure, for every engineer or architect that we hire, we have an audition step in the interviewing process. If you’re going to hire a master juggler or an opera singer, you’re going to say, “Juggle for me. Sing for me.” You’re not going to talk on the other side of the desk about doing it. You’re going to say, “Show me. Let me watch you do what you’re good at.” We have an audition step. Every good interview process should have the show-me step.
It’s not a hard programming problem. Anybody qualified to work at Clear Measure is going to sail through it because we only have 30 minutes anyway. Already, we have had a good and growing number of people do something interesting. We tell them, “Use whatever tool. Do what you normally do. It doesn’t matter. Do it. We’re going to observe you as you do it.”
Google searches have always been common in programming. Now we have had a growing number of candidates go to ChatGPT, have it generate some of the code, and copy and paste it. It doesn’t change the interview process one bit. It doesn’t hamper, hurt, or make it any more difficult for us to hone in on our assessment of the candidate’s skills and make the same decisions about who continues down the process. It’s a change, but it’s just a change.
That’s interesting. It’s going to eliminate some jobs but create jobs in another area. It’s a shifting of the workforce.
I do think about that. Every other big and important technology is a huge shift. Some people are going to have things that are supplanted, and then ten years later, we’re going to bring them back. Putting a PC on everybody’s desk, or a word processor on every manager’s desk, caused a lot of CFOs in the ‘80s to say, “We have an army of secretaries costing our company a lot of money, but all they’re doing is typing up dictated documents. We’re going to give the managers these computers. The managers can type their own memos. We don’t need these secretaries. That’s going to save us a lot of money. Let’s sweep away all these secretaries.”
I know this example because my mother-in-law has a Secretarial Science Bachelor degree. I’ve learned this from her. We’re several decades removed from the secretary elimination wave. Executive assistants are coming back. A new generation of managers is learning again the importance of having an administrative-style helper who is an expert in administration.
In programming, so many software systems are important for describing information, cataloging information, retrieving information, and making reports of information. That stuff was taught at universities in Secretarial Science degree classes. That whole bit of knowledge has gone away because the importance of it was reduced, saying, “We got computers. We don’t need all these people now.” A whole new generation is having to relearn what it means to properly classify information, communicate it, and steward it. Secretarial Science is a real thing.
It’s fascinating. What recommendations do you have for how we can use the latest round of AI in innovation?
I’m excited about it because it’s super empowering to the people that we serve, which are software engineers and teams that build software. This is a software program. Microsoft is deploying it aggressively. Google is deploying it aggressively. They’re going to be running it inside their data centers and making it available to customers. We will be able to write programs and build software that calls into it, make use of it in an already-built fashion, and get it to generate content for us in various ways. We will be able to create our own models with the specialized data that our companies already have.
There have been reports of a computer with 8 to 10 GPUs being able to build a model in maybe a few weeks. That’s not a lot of computing horsepower. We’re far away from needing a supercomputer, where you have to have several rooms of computers to crunch the data. It’s very affordable. We will be able to take Microsoft’s wealth of cataloged information, accessible through this generative pre-trained transformer technology, and combine it with other forms of automation that used to be called AI to create new, inventive products across a whole range of industries. I’m thinking of the wealth of past information combined with processing of real-time data that is relevant for a particular company. Comparing the two allows us to make better decisions on the fly and respond more quickly.
This is one of those topics I could go on and on about. It’s fascinating to listen to. What can we talk about that will draw people to you and make them want to pick up the phone? What question can I ask that will evoke a response from you that makes people want to call you? What’s the best way?
We have already gotten some of the questions, “Is my software team going to become obsolete?”
That’s good. Let’s go there. We will see if there’s anything else after that. What we’re talking about is that some jobs are going to be eliminated. One of the questions a lot of people are asking is, “Is my job at risk?” You’re a programming company. You develop code for companies. You already talked about how it can write code. Is this going to wipe out all the jobs for those who write code?
Every new technology is going to eliminate some types of jobs. We have robotic arms that can put bumpers on a car now. Car manufacturers need fewer people to do every single step of that process, but you still need people for some steps. It’s interesting that this new AI tool is software in and of itself, and it has to be integrated. This is going to create yet another type of programmer specialty, or yet another task for software engineers to do.
It’s going to expand the number of programming jobs, not decrease it. It can write code, but it’s not really writing code. It’s taking code from repositories that are known out there and piecing parts together to give code samples. It can do some things that look pretty darn interesting, but I have not seen it build a software system the way a person would have.
We have always done internet searches, “How do I write this particular code? Let me look it up. Let me use it. Now I need to get down the road,” but that’s using it as a tool. I predict it’s going to be a tool for programmers, not a replacement for programmers. I expect its capabilities and the new types of software systems that it enables will further increase the need and the desire for programmers as more companies say, “Light bulb, now I can do this in my marketplace for a better competitive advantage. Let me go and build another software product.” In an industry that already has a shortage of programmers, the demand continues getting higher and our supply continues to struggle to keep up.
That’s interesting. How can people get ahold of you, Jeffrey?
ClearMeasure.com is the website. I’m the Chief Architect there and the Original Founder. You can also give me a call at (512) 298-2377. My email address is Jeffrey@ClearMeasure.com. I love talking to people. Worst-case scenario, we have some free advice and maybe we help you along the way. Best-case scenario, one of our products or services is a fit for you. If you run a .NET software team in-house, we would love to help you make them better.
That’s good. It’s so important to get outside advice. I’m a consultant, so it’s self-serving to say that, since I’m outside of companies. Anyone who has employed you, me, and our services always realizes, “We weren’t seeing some of these things, and we would not have seen them if we hadn’t brought you in from the outside.” We help identify those blind spots. I want to say thank you so much for spending a few minutes of your day joining me here on the show.
It’s my pleasure. Thanks so much.
Important Links
- Clear Measure
- Visual Studio
- ChatGPT
- OpenAI
- Bard
- ServiceNow
- Freshdesk
- Zoho Desk
- Finastra
- Jeffrey@ClearMeasure.com
- https://www.LinkedIn.com/in/Palermo/
About Jeffrey Palermo
Jeffrey Palermo is the Chief Architect of Clear Measure, a software architecture company that empowers our clients’ development teams to be self-sufficient: moving fast, delivering quality, and running their systems with confidence.
Jeffrey has been recognized as a Microsoft MVP since 2006 and has spoken at national conferences such as Microsoft Ignite, TechEd, VS Live, and DevTeach. He has founded and run several software user groups and is the author of several print books, video books, and many articles.
A Christian, graduate of Texas A&M University (BA), and the Jack Welch Management Institute (MBA), an Eagle Scout, and an Iraq war veteran, Jeffrey likes to spend time with his family of five camping and riding dirt bikes.