Podcast 002 Crash Course in AI and ML Part 1

In part one, we’ll discuss the difference between artificial intelligence and machine learning and how the field came to be.

Don’t forget to subscribe to the show!

Episode 002 Crash Course on AI and ML Part 1 Transcript

Heather McKee:              Welcome to The Modern Polymath, where we discuss topics in technology, economics, marketing, organizational behavior, market research, human resources, psychology, algorithms, higher education, and cybersecurity.

Heather McKee:              Hey podcast universe, thanks for tuning in. On today’s episode of The Modern Polymath, we’re going to give you part one of two in a crash course on AI and ML, a.k.a. artificial intelligence and machine learning. Over these two parts, we’ll cover what AI and ML are, why they’re important to understand especially for business leaders, and hopefully provide some examples to help demystify a very complex topic.

Heather McKee:              For part one, we'll start with the definition of AI, how it all came to be, and break down a few sub-fields like machine learning. With us today, we have Dr. John Christiansen, John-David McKee, Will Callaway, and I am Heather McKee. Let's get this crash course started.

John-David M.:                So to define artificial intelligence, it’s pretty simple if you go at the highest level because we’re talking about a family of topics here. This is the broad category.

Heather McKee:              John-David McKee speaking there.

John-David M.:                Artificial intelligence is simply intelligence demonstrated by machines, which is in contrast to natural intelligence, which is what we as humans display.

Will Callaway:                  Right, and there are tons of different definitions. Everyone’s pretty much conveying the same message and definition of artificial intelligence, just using different words.

Heather McKee:              That’s Will Callaway.

Will Callaway:                  And it’s a 5,000-foot view because the category is so broad and it’s changing every six months, right? What artificial intelligence is yesterday is not what it will be in six months, a year, two years moving forward.

Will Callaway:                  AI is not artificial. It’s still designed by humans. It’s intelligent design. And there are still curated processes that need to be done before the AI can be optimized to perform a certain task.

Will Callaway:                  AI and ML will always beat the human within a certain context or a certain rule set, playing a certain game. But when it comes to the real world, taking on new information, rationalizing, or critically thinking about a new subject, the human will perform best with a little assistance. AI will never be able to outperform us there because it can't rationalize. It can't understand biases yet. It doesn't have any domain knowledge other than what the human has taught it. That's where the human element comes in. It's more of a teamwork situation, humans using it as a tool, rather than it being the singularity and the last invention that ever needs to be made.

John-David M.:                Well, it’s also why you can’t just buy an AI tool and think it’s going to solve your problems. It takes someone to understand it, to make it work and…

Dr. Jon C.:                         And to know whether or not it's actually working.

John-David M.:                Yeah. And not all analysts are created equal, right? And it's not something that just a machine can do.

Heather McKee:              Yeah. And we'll talk more about that later, about how people can use it as a tool and not have to be afraid of it. But first, let's go back and talk about how we got started with AI. Where did it even all begin?

Will Callaway:                  So it first kind of started off during World War II with Alan Turing, where he created one of the first computers to try and crack the Enigma code. There were a couple of other things after that, some sci-fi movies or whatever, pushing that agenda. And then it moved to a top-down approach, I think in the 1950s.

Heather McKee:              Can you tell me real quick just what is a top-down approach? What do you mean by that?

Will Callaway:                  So the top-down approach is basically someone creating so many rules that it's just a rule-based program. If it comes to a situation there's not a rule for, it won't be able to compute; it'll just snag.
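
To make that top-down idea concrete, here is a minimal sketch in Python (the rules and inputs are invented for illustration): the program only knows what a human wrote into its rule list, and when an input matches no rule it simply snags.

    # A tiny, hand-written rule set: the "top-down" approach.
    # Every situation has to be anticipated by a human-authored rule.
    RULES = [
        (lambda a: a["legs"] == 8, "spider"),
        (lambda a: a["legs"] == 4 and a["barks"], "dog"),
        (lambda a: a["legs"] == 2 and a["flies"], "bird"),
    ]

    def classify(animal):
        for condition, label in RULES:
            if condition(animal):
                return label
        # No rule covers this input, so the program just "snags".
        raise ValueError(f"No rule for input: {animal}")

    print(classify({"legs": 4, "barks": True, "flies": False}))  # -> dog
    try:
        print(classify({"legs": 4, "barks": False, "flies": False}))
    except ValueError as err:
        print(err)  # nobody wrote a rule for a cat, so the program has no answer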

Will Callaway:                  So then in the late 1960s, there was a lot of funding thrown into it. There were some large promises made and visions that were kind of curated, and people thought, "Wow, this is really going to be something." None of that came to fruition. It kind of stalled out; it couldn't do the things they thought it would be able to do. In the 1970s, it hit a dry spell, an AI winter, and people kind of gave up on it. It was not a forgotten technology, just something that people thought a lot of time was being wasted on.

Will Callaway:                  In the 1990s, the bottom-up approach was established as maybe a better way of approaching the AI problem. It deals a lot with neural networks, which we'll focus on in an episode of their own.

Dr. Jon C.:                         Some of the more popular examples though.

Heather McKee:              Dr. John Christiansen speaking there.

Dr. Jon C.:                         That really kind of put AI on the map, going all the way back to the chess master who essentially couldn't be beaten. A Russian by the name of Garry Kasparov played against a computer on the 11th of May, 1997, and lost. What the AI actually did was study the sequence of moves that gave it the greatest chance of winning. Because it can do that many calculations that fast, it can look at what Kasparov's move was, assess all the probable next best moves, and make the best one. That was its entire goal. So the entire anchor around AI, and then when we get into machine learning, is identifying your target and then learning: looking for ways to achieve the goal or the target and finding the most optimal way to do that.

Dr. Jon C.:                         And where it gets really interesting is where the learning comes in. When you figure that a computer can beat one of the greatest chess minds of all time, you've got to think, "Okay, so much has to do with the power of those calculations and those computations." Watson winning Jeopardy years later really put AI on the map as well.
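
As a rough illustration of the move-selection idea described above (not how the chess machine actually worked internally, and all names here are invented), a chess-style program scores each legal reply by the position it leads to and plays the one with the best estimated chance of winning:

    # A toy sketch of the core idea: given a position, enumerate the replies we
    # could make and pick the one our evaluation says gives the best chance of
    # winning. A real engine searches many moves deep and evaluates millions of
    # positions per second; this looks only one move ahead.

    def best_reply(position, legal_moves, apply_move, estimate_win_chance):
        # Score every legal move by the position it leads to, then take the best.
        scored = [(estimate_win_chance(apply_move(position, move)), move) for move in legal_moves]
        return max(scored)[1]

    # Stand-in "game": positions are numbers, and higher means better for us.
    print(best_reply(
        position=0,
        legal_moves=[-2, 1, 3],
        apply_move=lambda pos, move: pos + move,
        estimate_win_chance=lambda pos: pos,
    ))  # -> 3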

Heather McKee:              Then really, the first kind of home robot that we had was actually the Roomba. And I think that even goes to show how far AI has come, or has not come, toward things taking over the world, because all it can do right now is vacuum up our dirt and food.

Dr. Jon C.:                         And not even all that well from what I gather.

Heather McKee:              Yeah, exactly. So, I don't think that's too much that we have to worry about. But that's a great example of how AI is actually helping the world, or how it can help. And there are other examples later on, in the early 2000s, where companies like Boston Dynamics were working on war machines, actually creating an autonomous robot that was like a dog and would be able to go out into the field and essentially either carry human soldiers or take their place. And yes, some people can see the fear in that, where maybe these machines go out there and start shooting at the wrong side. But then you've also got some of those robots going out there and helping to find and set off IEDs, instead of our soldiers going out there and risking their lives doing it.

Heather McKee:              So again, just another example of the pros and cons. But really, where AI has gotten to in terms of other everyday uses is speech recognition technology: how we're all using Siri or Google assistants or whatnot, and the accuracy with which these devices can give us results or actually carry out the tasks we're asking them to do for us. It's becoming more and more accurate. That's the improvement in AI so far: the accuracy. Not that it can now read our minds or something; it's just getting more accurate with what we're putting into it.

Will Callaway:                  That's a great point, because it shows that a lot of people are fearful of what it could be or what it will be one day, and in reality, we're getting things like Roombas and simple robots that can just open doors. Most five-year-olds are able to do some of those things. And a lot of the progression over the past, let's say, 15 years is because the AI winter's over. Investment is up, and AI is the new buzzword to put in your pitch deck to try and get funding: "We have some layer of AI." "Well, what layer is that?" "Oh, it's proprietary. Invest before we'll tell you." Things like that. And high-performance computing and cloud-based computing are the tools to build AI. The things people are thinking about now that they could potentially do with AI and massive amounts of data are more attainable. All these resources allow people to tackle larger problems with AI than they were able to in 1975 with the limited resources at hand.

John-David M.:                The amount is crazy. I mean, Google processes more than 40,000 searches every second. Process that for a minute.

Heather McKee:              Right. And so some of the ways people are using AI, now that it is more attainable, are really being seen in healthcare. People are able to pull data they're getting from patients, such as medical images and things like that, and create networks that the computer searches through, looking for patterns to uncover maybe a cancer, or whatever it is they're actually studying in the medical images. And not even just that, there are the wearables people are wearing now, like the Apple Watch. It's calculating your heart rate and can now let you know whenever it's elevated above normal, so that if something is going on, you can get it checked out. I think I've heard of at least three different success stories where people said the Apple Watch saved their life. They didn't even know they had something wrong with them until they looked down and their heart rate was going nuts.

Will Callaway:                  Which is a great transition, because the Apple Watch is saving people's lives, and now we're going to talk about wine, which is great for your heart if you only drink a little bit at a time. And how UAVs and machine learning are helping out the agro-industry, where they're sending up a drone that takes really high-resolution pictures in a hyperspectral imaging setup. Then machine-learning models can look at the different crops in the vineyard, or different sections of the vineyard: which vines are producing the best grapes, and where on your land that's happening. You can start studying it, studying the photosynthesis layers. And you can basically create better products with lower overhead. That's pretty much the goal of every business, right? How can we lower our costs and increase our profits?

John-David M.:                If we're going to talk about examples of AI, you can really dive into things like Pandora and the engine they came up with, or the recommendations on Amazon. People see that all the time. There's not some group of people back there saying, "You would also like this." That's an algorithm running.

Dr. Jon C.:                         Yeah, it's called Apriori, and it's really not that complicated at all. All it's doing is looking for "if this, then that." If you purchase this, then what's the most probable thing you are likely to purchase with it? Another name for the concept is market basket analysis: what things are most likely to go in a basket together? It's doing exactly that. Pandora works a little differently; it's more of a nearest-neighbor approach. If you have a song that you like and you hit the thumbs up, it's going to take the attributes of that song, and there are dozens of them, maybe even hundreds now, and look for other songs that have very similar features, and put those in front of you. If you like one of those too, it's now putting it in the bucket of things you like and getting closer to your preference for whatever station you're creating.
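
A minimal market-basket sketch of that "if this, then that" idea, with invented baskets: count how often items appear together, then estimate, given that you bought one item, which other item is most likely to go in the basket with it.

    from collections import Counter
    from itertools import combinations

    # Hypothetical purchase baskets.
    baskets = [
        {"bread", "butter", "jam"},
        {"bread", "butter"},
        {"bread", "jam"},
        {"beer", "chips"},
        {"beer", "chips", "salsa"},
    ]

    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        item_counts.update(basket)
        pair_counts.update(combinations(sorted(basket), 2))

    def most_likely_companion(item):
        # Confidence of the rule "if item, then other" = P(other | item).
        best, best_conf = None, 0.0
        for (a, b), count in pair_counts.items():
            if item in (a, b):
                other = b if a == item else a
                conf = count / item_counts[item]
                if conf > best_conf:
                    best, best_conf = other, conf
        return best, best_conf

    print(most_likely_companion("bread"))  # -> ('butter', 0.666...): bread most often goes with butter here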

John-David M.:                All right, so we've covered AI as a whole, as a broad category, but it's really important to understand that it's exactly that: just a broad category. And there are so many different sub-fields of AI. A few examples are things we're all familiar with, like speech processing, a la Siri, or natural language processing, vision and image recognition, planning and decision-making tools, robotics, and neural networks. But what we really want to focus on today, because it's core to our work and also core to the AI-driven world we live in now, is machine learning.

Dr. Jon C.:                         Machine learning is becoming one of the more enticing areas out there, simplifying our world and making us much better at the things we need to be spending more time on.

John-David M.:                Machine learning is really what's being used almost across the board in all the different industries out there. It's something that's been around for a long time. It's well understood, and it has significant value it can add at many different levels in many different organizations, whereas some of the other fields are still in the early stages and aren't ready for prime time.

Will Callaway:                  So, machine learning is older and more well-understood than other aspects of AI. And that's why certain businesses and companies, whether it's FinTech, healthcare, whatever it may be, are starting to implement it: because they can understand it. But there are still barriers, because for some of the hardcore AI and machine learning work, there's not much talent out there.

Dr. Jon C.:                         Basically, with machine learning, in an effort to perform some goal or task, you're teaching a computer or a machine a set of rules in order to achieve that goal.

Heather McKee:              So once you understand the target and know the rules, how do you actually begin implementing AI and machine learning? What do you do?

Dr. Jon C.:                         I think people think they know the target, but they don't. So before I even really get into that construction, before I even start asking where the data is or what we have, which factors into that, people really haven't properly articulated the problem, need, issue, controversy, question, purpose, any of that stuff. And a lot of times, it's hard to do. So you just have to start, especially if it's a really complex problem, by asking simpler questions to see if they start chipping away at it. You have to start with, "What do we know? What do we not know, and where's our gap?" Then calibrate the problem, need, issue, controversy, question, what have you, and really identify what that is. After that, it's all about what data we have, what we can get access to, and what it looks like.

Dr. Jon C.:                         And then once you start collecting it, harvesting it, blending it, what have you... All right, so let's take an example. Let's say all the people in this room have read this novel at least 1,005 times collectively. Whenever this thing comes out, in the first three chapters, what we're going to learn is this main character: his entire goal is to use data and algorithms to solve really complex crimes. And what he tackles in this entire escapade, over the three days after the fire, is, "Let me get all the data I can about arson," because that's the crime of interest here. So when we got into this story, we actually said, "Okay, why make this up? Let's actually get the data and run the model so that the story follows that track."

Dr. Jon C.:                         So it's like, "Okay, well, let's see what's out there." You've got the FBI's Uniform Crime Report data that you can pull, which has arson categorized. It has latitude and longitude, it has all the dates and the like, it has all the other crimes that are associated with it. You have victim-level stuff. You have perp-level stuff. Then you blend that with the Department of Homeland Security and FEMA, which have data on literally all fires that are reported up to the state level, which is a lot. I mean, you're talking millions and millions of records. So you do that over a couple of years. There's an entire file dedicated to classifying arson, what the underlying motivations might have been, and anything they know about that fire scene. So not only is it classifying the arson versus all other types of fires, kitchen fires, what have you, but it's telling me the areas of origin, the ignition points, what was used as a heat source. Was it a match, was it a lighter, was it electrical? What have you.

Dr. Jon C.:                         So you've got all these things, and here's what you're trying to do. What we did in the story was say, "Here's what we know. These are seven fires that all happened in a spree on one night. All the homes were valued at a million-plus. They were all at different points in Chicago, which is unusual, because most arson spree-type people usually do it in a very isolated area, in a cluster, whereas these were spread out. So what do we know about that?" So then we're able to take that real data and say, "Here's our target. Our target is arson cases that fit this exact profile. Now let's take everything we have about incendiary devices and areas of origin and all these other things." And out of all that development, the really interesting question was, "What are the variables that predict a fire of this profile?" And out of that it's like, "Oh, wow. So a home this big, multiple areas of origin. You're going to set up in three or four different places and you're going to create separate fires."

Dr. Jon C.:                         You've got incendiary devices used that are likely going to be sophisticated: electronic, chemical, what have you. And they're probably delayed devices, which means the arsonist sets it up and gets out of there, hence why the arrest record's very low. Some of those things are non-observable; in other cases the ignition point and area of origin are unknown. That's because a house that big, destroyed at that level, is completely gone. You can't even start to really trace back where the fire started, because the fire was everywhere.

Dr. Jon C.:                         So that's an applied example of how you take a question of that type and create a dependent variable: one for all arson cases that fit this profile, and zero for everything else. Show me what distinguishes these specific fire types from every other possibility, so that those are the patterns I can look for when I'm in the investigation. And, no spoilers, but obviously the findings in there have a lot to do with where that path takes it.
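
Here is a minimal sketch of that modeling step, assuming a toy stand-in for the blended fire data (the column names and numbers are invented; the real FBI and FEMA files are far richer): build the 0/1 dependent variable for fires that fit the profile, fit a simple logistic regression, and read the coefficients as hints about which variables distinguish those fires from everything else.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Invented stand-in for the blended fire-incident data.
    fires = pd.DataFrame({
        "home_value_millions": [1.4, 0.2, 1.8, 0.3, 2.1, 0.5, 1.6, 0.4],
        "areas_of_origin":     [3,   1,   4,   1,   3,   1,   4,   1],
        "delayed_device":      [1,   0,   1,   0,   1,   0,   0,   0],
        "fits_profile":        [1,   0,   1,   0,   1,   0,   1,   0],  # dependent variable: 1 = arson of interest
    })

    X = fires[["home_value_millions", "areas_of_origin", "delayed_device"]]
    y = fires["fits_profile"]

    model = LogisticRegression().fit(X, y)

    # Positive coefficients point to variables that push a fire toward the profile.
    for name, coef in zip(X.columns, model.coef_[0]):
        print(f"{name}: {coef:+.2f}")

    # Score a new fire: the estimated probability it fits the profile.
    new_fire = pd.DataFrame([{"home_value_millions": 1.9, "areas_of_origin": 3, "delayed_device": 1}])
    print(model.predict_proba(new_fire)[0, 1])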

Heather McKee:              Hm, interesting. Well, so I feel like we’ve talked a good bit about machine learning and some examples of kind of how that works behind the computer scenes, if you will, air quotes. But we’ve also mentioned some other examples that involve different subfields of AI, like natural language processing, speech processing, and some others. So, let’s just touch on those really quick.

Dr. Jon C.:                         Natural language processing. Soon we're going to go directly into how machine learning and AI are integrating into HR recruitment, resume screening, and the like, but essentially it's how speech recognition works. Right now we could be speaking into a speech recognition software program, and it could actually be transcribing what we're saying: speech processing. It's as if we had a transcriber doing their best to build a transcript of what I'm saying. Someone somewhere created some massive engine that translates what is being said into actual text and can find the patterns that build that out.

John-David M.:                I would venture a guess that most people know that when they use Siri, they're using AI, or that AI is backing it; that's a safe assumption. But how it actually works is the mystifying part.

Dr. Jon C.:                         Well, we can draw pretty strong inferences on that. Every time you use Siri, you ask Siri to perform a task, which is exactly what you're doing: "Hey Siri, do X, Y, Z." And literally, that's exactly what Siri does. It says, "Hey, Siri, do X, Y, Z." So Siri is, A, processing my language and my speech.

John-David M.:                She’s literally processing your language.

Dr. Jon C.:                         That’s amazing.

Dr. Jon C.:                         Yeah. Okay. We need to probably turn that off. And in doing so, Siri has some indicator of whether or not it was successful, because if it generates a link and we click on it, then Siri knows the function she performed was successful, and if not, likely we close out or do something else. So the more anybody uses Siri, or the Amazon one, which we will not say so it doesn't set off another AI toy in this house, the better it becomes at getting at whatever your task or your goal is. But it's also likely borrowing from the aggregate of everybody that uses it and building patterns around "if this, then that," which we alluded to earlier.
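
A hedged sketch of that feedback loop, with all names and numbers invented: keep a running success rate for each way a spoken request could be interpreted, count a click on the result as a success, and prefer the interpretation that has worked best so far.

    from collections import defaultdict

    # Success and attempt tallies for each (spoken phrase, interpretation) pair.
    attempts = defaultdict(int)
    successes = defaultdict(int)

    def record_feedback(phrase, interpretation, clicked):
        # The user clicking the result counts as an implicit "that was right."
        attempts[(phrase, interpretation)] += 1
        if clicked:
            successes[(phrase, interpretation)] += 1

    def best_interpretation(phrase, candidates):
        # Prefer the interpretation that has worked most often for this phrase.
        def success_rate(c):
            a = attempts[(phrase, c)]
            return successes[(phrase, c)] / a if a else 0.0
        return max(candidates, key=success_rate)

    # Simulated history: "play beethoven" was usually meant as music, not a web search.
    record_feedback("play beethoven", "play_music", clicked=True)
    record_feedback("play beethoven", "play_music", clicked=True)
    record_feedback("play beethoven", "web_search", clicked=False)

    print(best_interpretation("play beethoven", ["play_music", "web_search"]))  # -> play_music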

John-David M.:                But on that note, interestingly, talking about Siri and Alexa sets devices off everywhere. So I guess, sorry, listeners, if you're listening to this, you probably have a lot of stuff going off right now. We've got to be careful with that; it's not a good way to keep followers. Not only are these systems learning whether they're doing a good job of matching your search query, whatever it might be; they're learning your voice, how you speak. I mean, I tend to mumble. I know Siri has a hard time with that sometimes, and I can tell, but it's getting better, because every time it's validated that it got the right thing, it learns that the words it's interpreting are correct, and if not, then that's not the case. And being from the South, there are a lot of strong accents down here, where I'm sure Siri, being programmed in California, really struggles to understand some of the words coming out of our mouths.

John-David M.:                But it's learning that; it's learning the way that you talk, the way that you phrase things. Not everybody speaks perfect English. It has to pick up on that and interpret it. It has to make a decision, a best-guess decision, about what you're trying to ask it to do.

John-David M.:                But a growing trend in SEO, search engine optimization, is that voice searches are becoming more and more prominent, so there's a recognition behind that. Here's where AI plays with AI; they've got to play nicely together, right? You've got voice recognition and language processing happening on the AI side whenever you're searching for something. But then there are the algorithms underlying Google (or, let's be real, does anything else really matter? Sorry, Bing). Google as an algorithm is learning to process voice search: "Hey Siri, search for this." And then marketers and people running websites and things like that have to learn to optimize their sites so people can voice search for them, because a voice search is going to have different characteristics, different phrasings, than somebody typing something in. It's just a different scenario; we talk differently than we type in a lot of cases. And you think about all the different nuances within these different types of systems, all running AI, running together. It's a complicated system.

Heather McKee:              For sure. Absolutely.

Dr. Jon C.:                         One of the ones that I think is kind of the fear-mongering one is facial recognition. How it works is it's taking dozens to hundreds to millions of images of any given face and matching them to who we know the owner of said face is, right? So if anybody has the new Apple phone, and I think Samsungs and the like all do it too, when you do that whole thing where it's scanning your face, it's getting dozens, hundreds, thousands of points on your face to know what distinguishes yours from everyone else's. So when it sees that pattern collectively, it knows that you are the owner of the phone, and it unlocks.
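
A rough sketch of that matching step, with invented measurements and an invented threshold: reduce a face scan to a vector of numbers, compare it to the enrolled owner's vector, and unlock only if the two are close enough. Real systems use far more points and learned features, but the pattern idea is the same.

    import math

    def distance(a, b):
        # Euclidean distance between two face "signatures" (lists of measurements).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def unlocks(enrolled, scan, threshold=0.5):
        # Unlock only if the new scan is close enough to the enrolled owner's face.
        return distance(enrolled, scan) < threshold

    owner = [0.32, 0.58, 0.11, 0.76]     # enrolled measurements (invented)
    tonight = [0.33, 0.57, 0.12, 0.74]   # the owner again, slightly different lighting
    stranger = [0.70, 0.20, 0.55, 0.40]

    print(unlocks(owner, tonight))   # -> True
    print(unlocks(owner, stranger))  # -> False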

Heather McKee:              Yeah. It really is crazy to think about how those tools we use on the regular are really just algorithms working together. But with that, we are out of time today, and we'll pick back up with part two in the next episode. In part two, we'll discuss how organizations are using AI in their business and give some helpful tips and insights for business leaders exploring the idea of integrating AI into their own business.

John-David M.:                We need to understand how it can be used within organizations, the impact that it can have because it can create significant competitive advantages.

Heather McKee:              That’s next time. Don’t forget that you can find the in-depth content related to this episode and all topics we discuss on the podcast, on our website, insandouts.org. Catch you next time.
