Crash Course on Artificial Intelligence and Machine Learning
The terms artificial intelligence (AI) and machine learning (ML) are sometimes used interchangeably due to the recent buzzword phenomenon surrounding these analytical approaches. Because of this buzzword takeover, people’s understanding and use of these terms are all over the map. AI can mean anything from a computer doing something smart based on data to researchers trying to replicate the human thought process. Machine learning is a subfield of AI and has become an integral part of the modern understanding of AI, but not all AI is classified as ML. In this article, we will discuss AI and ML as a whole, the opportunities AI presents, and some of the challenges modern companies face when thinking about the solutions AI can provide to their business models.
What is artificial intelligence? What is machine learning? How are they connected?
Artificial intelligence is simply intelligence demonstrated by machines.
Machine learning is a subfield of AI in which a computer or machine learns the rules for achieving a goal or task from data, so that a human doesn’t have to program those rules explicitly.
Over the last couple of years, artificial intelligence has seen a jump in the effectiveness of advanced machine learning techniques. Factors behind this jump include, but are not limited to, high-performance computing, increased investment, the development of distributed methods, the availability of large labeled datasets, and market competition. AI is making its way from the depths of academic advances to large companies and startups alike, all hoping to gain an edge over the market.
AI and ML will usually beat a human within the confines and context of a specific game or task (see examples in the “Evolution of AI” section below). However, when it comes to taking in new information and rationalizing or using critical thinking, a human will win. Artificial intelligence doesn’t need to be a binary choice between humans and machines; AI is just a tool, and it will be up to humans to decide how best to use it.
Subfields of AI:
- Natural Language Generation: Producing text from computer data.
- Speech Recognition: Transcribing and transforming human speech into a format useful for computer applications.
- Virtual Agents: From simple chatbots to advanced systems that can network with humans.
- Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines.
- AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs.
- Decision Management: Engines that insert rules and logic into AI systems and are used for initial setup/training as well as ongoing maintenance and tuning.
- Deep Learning Platforms: A particular type of machine learning consisting of artificial neural networks (see next section) with multiple abstraction layers.
- Biometrics: Enabling more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language.
- Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes.
- Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods.
Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated.
Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they can be used to classify data when you have a labeled dataset on which the model can be trained.
Before deep learning, neural networks often had only three to five layers and dozens of neurons. Now, with deep learning, networks can have seven to ten or more layers, with simulated neurons numbering into the millions.
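To make the layered structure concrete, here is a minimal sketch (in Python with NumPy) of a forward pass through a small multi-layer network. The layer sizes and random weights are illustrative assumptions; a real network would learn its weights from labeled data rather than use random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes each value into (0, 1), a common neuron activation.
    return 1.0 / (1.0 + np.exp(-x))

# A toy 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# Each layer transforms its input vector; a "deep" network simply stacks
# more of these layers, each capturing a higher level of abstraction.
layer_shapes = [(4, 8), (8, 8), (8, 2)]
weights = [rng.normal(0, 0.5, size=shape) for shape in layer_shapes]
biases = [np.zeros(shape[1]) for shape in layer_shapes]

def forward(x):
    # Pass the raw input vector through every layer in turn.
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

x = rng.normal(size=4)   # one raw, numerical input vector
out = forward(x)
print(out.shape)         # (2,) - one score per output class
```

With untrained weights the two output scores are meaningless; training consists of adjusting every weight so those scores match labeled examples.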
Evolution of AI
During WWII, scientists and scholars from varying fields of study worked together to advance the use of intelligent machines. Shortly after, Alan Turing proposed the Turing Test, a criterion for whether a computer could fool someone into thinking they were speaking with a human. The imaginations of science fiction authors and filmmakers were piqued in the 1950s and, as the genre gained popularity, so did interest in the fields of robotics and science.
The term ‘artificial intelligence’ was officially coined in 1956 at the Dartmouth Conference. During this conference, scientists discussed the differing views around the two approaches that can be used to simulate cognition and problem-solving, called “top-down” and “bottom-up”. The top-down approach starts high-level by taking a pre-programmed knowledge base (a large collection of pre-defined knowledge) and performing calculations that pull from the interconnected fields of information available. This simulates how the human brain takes a “big picture” approach to problem-solving, starting with the large question at hand and then pulling from all it has learned and experienced to reach a holistic answer. Those on the opposite side, supporting bottom-up AI, were more interested in investigating the aspects of cognition that can be recreated by building networks that simulate the way neurons work in the human brain. The bottom-up approach starts with a simple concept, and the system is taught to build its own knowledge base and derive its own conclusions as it expands beyond the point of origin.
After heavy investment in and research into AI during the 1960s produced little to show for it, the field stalled and entered an AI winter during the 1970s. In the 1990s, the bottom-up approach was revived and established as the better way to work with AI given the technology available at the time.
In 1996 and 1997, Garry Kasparov played two six-game chess matches against IBM’s supercomputer Deep Blue. Kasparov won the first match but lost the second. The second match was a milestone for AI, proving that one of the world’s greatest chess minds could be defeated by a machine. Although an older example than Google’s AlphaGo victories, the Kasparov vs. Deep Blue matches showed what a combination of research, funding, and computational power could achieve. This achievement revitalized the possibility and vision of past expectations surrounding AI.
The early 2000s are when AI started to see real progress in robotics. The Roomba was introduced as the first home robot vacuum cleaner, and military robots were built to take the place of human soldiers in disposing of bombs. Building upon the advances of neural networks in the late 2000s, personal assistant apps using speech recognition became more than 80% accurate, and a group of 20 robots was able to learn a dance routine and perform in harmony for eight minutes. AI really had its moment in the spotlight, though, when IBM’s Watson defeated its human competitors on Jeopardy!
The boundaries of what AI can achieve are constantly being pushed, and it will be interesting to watch them unfold.
Examples of AI and ML Today
Speech recognition is a very popular subfield of AI. One example of how it is being used today is within Customer Service Management. The dreaded robotic customer service phone lines usually aren’t the most pleasant experience, often leaving the customer repeating the same word or phrase over and over. With AI, speech recognition allows for a more efficient transition between each step, learning through the conversation to ultimately provide the customer with a more helpful and personalized experience. For instance, deep learning analysis of a call allows systems to assess a customer’s emotional tone; if a customer is responding negatively to an automated system, the call can be rerouted to human operators and managers.
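The rerouting logic itself can be sketched very simply. In the snippet below, a trivial keyword heuristic stands in for the deep learning sentiment model; the `sentiment_score` function, its threshold, and the route names are illustrative assumptions, not a real call-center API.

```python
NEGATIVE_THRESHOLD = -0.5  # illustrative cutoff for "frustrated enough"

def sentiment_score(utterance: str) -> float:
    """Stand-in for a deep learning sentiment model: returns a score in
    [-1, 1], with negative values meaning a frustrated caller. Here it
    is just a keyword heuristic for demonstration purposes."""
    negative_words = {"angry", "cancel", "useless", "frustrated"}
    hits = sum(w.strip(".,!?") in negative_words
               for w in utterance.lower().split())
    return -min(1.0, hits / 2)

def route_call(utterance: str) -> str:
    # Escalate to a person when the caller sounds sufficiently negative.
    if sentiment_score(utterance) <= NEGATIVE_THRESHOLD:
        return "human_operator"
    return "automated_system"

print(route_call("I want to check my balance"))        # automated_system
print(route_call("This is useless, I'm frustrated!"))  # human_operator
```

In a production system the score would come from a model trained on recorded calls, and routing would feed back into the training data.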
Image recognition, another subfield of AI, can be seen in use today in Healthcare. The primary aim of health-related AI as of 2019 is to analyze relationships between prevention or treatment techniques and patient outcomes, as well as to perform medical image recognition. AI imaging tools can be used to screen medical images faster and with comparable accuracy to human doctors. If deployed in a mobile app, this AI technology could have widespread benefits in underserved, low-resource areas of the world, reducing the need for a trained diagnostic radiologist on-site.
Another example of AI helping society is within the Financial sector, around fraud detection. The hyper-connectivity of the world we live in means the payments industry is working closely with the security industry to lower fraud. The ability of AI to harness massive amounts of fraud data, becoming a useful tool for the analysts and systems within these organizations that cohesively combat fraud, is increasing every day. This can be as simple, and as hugely beneficial, as your trusted bank sending you a notification of a suspicious purchase, allowing you to respond “yes, that’s me” or “no, that’s fraud”.
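A minimal sketch of that notification idea, assuming a simple statistical rule: a purchase far outside a customer’s usual spending pattern triggers a “was this you?” check. The transaction history and the z-score threshold here are made up for illustration; real systems learn far richer patterns from labeled fraud data.

```python
import statistics

# A customer's recent purchase amounts (illustrative data).
history = [12.50, 8.99, 23.10, 15.75, 9.40, 31.20, 18.00, 11.25]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount: float, z_threshold: float = 3.0) -> bool:
    # Flag a purchase more than z_threshold standard deviations
    # away from this customer's typical spending.
    return abs(amount - mean) / stdev > z_threshold

print(is_suspicious(17.80))   # False - a typical purchase
print(is_suspicious(950.00))  # True  - flag it and notify the customer
```

Even this crude per-customer baseline captures the core loop: model normal behavior, score each new transaction against it, and route outliers to the customer or an analyst.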
Adopting AI and ML in Business
Disruption from AI is here, but many company leaders aren’t sure what to expect from AI or how it fits into their business model. With digital transformation coming at breakneck speed, the time to formulate a strategy is now. A study by Cowen and Company shows that 81% of IT leaders are currently investing in or planning to invest in AI: 43% of those leaders are evaluating and running a POC (proof of concept), and 38% are already live and planning to invest more.
If you have good data and you can teach the machine to learn, it will surface observations at a scale no human can compute. Data engineering teams usually focus on data infrastructure and pipelines, while data science teams usually focus on asking the right questions that will produce insights for decision making. Each company has different data challenges, but the core concepts a data engineer will most frequently encounter are data locality, consistency, caching, hash tables, and tree-based indexes.
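As a hypothetical illustration of two of those concepts, the snippet below contrasts a hash-table index (fast exact-key lookups) with a sorted, tree-like index (which also supports range scans). The record data is made up, and Python’s `dict` and `bisect` stand in for a database’s hash and B-tree indexes.

```python
import bisect

# Toy records: (customer_id, row) pairs, arriving in no particular order.
records = [(103, "a"), (101, "b"), (107, "c"), (105, "d")]

# Hash-table index: O(1) lookups by exact key, but no ordering.
hash_index = {key: row for key, row in records}

# Sorted (tree-like) index: supports range scans over keys.
sorted_keys = sorted(key for key, _ in records)

print(hash_index[105])            # exact-key lookup -> 'd'

lo = bisect.bisect_left(sorted_keys, 102)
hi = bisect.bisect_right(sorted_keys, 106)
print(sorted_keys[lo:hi])         # keys in [102, 106] -> [103, 105]
```

The trade-off sketched here is the same one a data engineer weighs when choosing index types: hash structures answer “find key X” fastest, while tree-based structures also answer “find all keys between X and Y.”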
“Too often people look to data scientists and ignore the fact that, in order to be successful in data science, you need to have an effective data platform. We actually see the biggest skill gap is in high-quality data engineers who can build these new data applications and organize the data.”
– Ron Bodkin, CEO of Think Big Analytics
AI teams are different from data engineering teams and data science teams. AI teams usually focus on building, optimizing, and scaling deep learning algorithms that emulate human abilities such as vision, speech, language, decision making, and other complex tasks.
Capturing the potential impact of these techniques requires solving multiple problems. One such problem is the use of data, which must always take into account concerns such as data security, privacy, and potential issues like human bias. Technical limitations include the need for a large volume and variety of labeled training data, although continued advances are already helping to address these.
The effective application of AI also requires organizations to address other key data challenges to define operational and technological processes, including:
- Establishing effective data governance
- Defining ontologies (a set of concepts and categories in a subject area or domain that shows their properties and the relations between them)
- Data engineering to connect and align the various pipelines (“pipes”) from the available data sources
- Managing models over time
- Building the data pipes from AI insights to either human or machine actions
- Managing regulatory constraints
At the time of this writing, most machine learning systems don’t uncover causal mechanisms; they are excellent statistical correlation engines. They can’t explain why they think some patients are more likely to die, because they don’t “think”…they only answer. This is where human interaction is needed to leverage critical thinking and ask the right questions to determine the causal relationships between correlated variables. This is where potential information becomes true knowledge and insight.
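A toy numerical example of why correlation alone can’t answer “why”: two variables with no direct causal link, both driven by a hidden confounder, still show a strong correlation that any statistical engine will happily report. The temperature/ice-cream/drownings setup is a classic illustrative assumption, with made-up coefficients.

```python
import random

random.seed(42)

# Hidden confounder: daily temperature drives both variables below.
n = 1000
temperature = [random.gauss(20, 5) for _ in range(n)]
ice_cream = [2.0 * t + random.gauss(0, 1) for t in temperature]  # sales
drownings = [0.5 * t + random.gauss(0, 1) for t in temperature]  # incidents

def pearson(xs, ys):
    # Standard Pearson correlation coefficient.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream, drownings)
print(round(r, 2))  # strongly positive, despite no causal link
```

A correlation engine stops at the high `r`; it takes a human asking “what else changed?” to surface temperature as the actual cause and avoid a policy like banning ice cream to prevent drownings.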
A world of information without understanding, which for now requires human interpretation, becomes a world without discernible cause and effect. In this case, we grow dependent on our digital concierges to tell us what to do and when, avoiding thinking and planning in the process. Companies deploying AI will need to think through their safe and responsible use of data and the business models they employ that make use of user or customer data. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.
On the technical side, mapping problem types and techniques to the sectors and functions where they offer potential value can guide a company with specific areas of expertise as to where to focus. Companies will need to consider efforts on the “first mile”, that is, how to acquire and organize data, as well as on the “last mile”, or how to integrate the output of AI models into workflows, ranging from clinical trial managers and sales force managers to procurement officers.
Organizations sometimes use an “answers first, explanations later” approach to discovery, thus accruing what is known as intellectual debt. It’s possible to discover what works and put that insight to use immediately without knowing why it works, assuming the underlying mechanism will be figured out later. In some cases, organizations pay off this intellectual debt quickly, maximizing the effectiveness of the information while diligently following up with the accompanying explanation. When left without an explanation, however, this intellectual debt compounds. In these cases, significant time can pass (sometimes decades) without a true understanding of why something was done, usually resulting in significant problems down the road when the intellectual debt must be repaid in order to further innovate or update the initial insight.
AI takes the “answers first, explanations later” approach to solving problems, putting the onus on organizations to be diligent in repaying the intellectual debt. Information that can’t be fully defined and measured can cause managers and business leaders to question the value of the answers provided by an algorithm or AI layer, impacting the viability, and likely the supporting budget, of what could otherwise be a successful AI initiative.
Other business-specific problems that often arise when using AI are:
- Lack of clear AI use cases
- Lack of skills to implement
- Lack of data
- Lack of the right processes or governance
- Lack of a modern data management platform
This brings us to what is needed to effectively transform your company so that you are primed to take advantage of AI technology.
Building a data culture is the key
According to the 2019 Big Data and AI Executive Survey conducted by NewVantage Partners, 31% of companies reported having a “data-driven organization” and 28% a “data culture.” Why do those numbers sound low? Perhaps because 77.1% of companies report that cultural challenges remain the greatest obstacle to their adopting Big Data and AI initiatives. Executives cited that 95% of these obstacles stem from people and process, and only 5% relate to technology.
Building a data culture begins with executive buy-in. The data culture starts at the top and works its way throughout the company, whether that means knocking down silos of information, deciding what legacy data to keep, or understanding the data collection processes currently in place. Clearly defining the business model points toward the right questions to ask, and focusing on data quality is another major key to providing the building blocks that unlock analytical insights.
Not all data is created equal, and not all data collected needs to be used or kept. Correctly implementing these building blocks will allow companies to use future techniques and technologies to their advantage. Each piece of the data culture needs to be in place for companies to sustainably succeed with such an initiative. For instance, you can’t just nail down collecting the right data and think you’re finished, since the data alone has little to do with the analytical software or programming languages used to analyze it (among other considerations).
Companies without effective data cultures face financial losses, reputational damage, inferior decision making, and missed opportunities. With the increasingly competitive landscape and constantly evolving technologies available, businesses operating in a digitized economy can’t afford to miss potential opportunities when they arise.
The AI Revolution and Information Age are here. Forward-thinking companies have significant opportunities to take advantage of what was an unfathomable amount of data and insights even a few years ago. This is a completely new phenomenon whose ripples are causing a paradigm shift in decision science, management and technology across every industry and economy.
So why not use these AI tools to help us comb through these massive amounts of data, beyond anything even a large team of humans could do? We can then spend our time doing the one thing the machine cannot emulate, the thing that most defines us as a species: critical thinking.
We have only begun to scratch the surface of the insights available via the collective pool of data that continues to grow at an exponential rate. Organizations that build a data culture and focus on data-driven strategic decision making, with ongoing improvement to their technology and processes, are poised for a bright future, while those that ignore these trends are playing a risky game that will likely end in being overtaken by those who effectively leverage their power.