We are witnessing AI causing disruption all around us, and becoming AI-savvy opens the door to many opportunities. However, this rapid adoption comes at a huge cost: AI increases power and energy consumption and drives up greenhouse-gas emissions from the huge data centers built to run AI applications. The stakes are getting higher. This requires us not only to be AI-savvy or AI-first but also to be AI-responsible. In this article I discuss three approaches to becoming AI-responsible and explain why they matter in building sustainable AI systems.
AI adoption and usage are bound to increase; it is becoming hard to imagine life without AI. Even the octogenarians in my home talk about AI, when they never talked about computers or their applications in past decades. However, we have to be aware that the same kind of engines that run automobiles and heavy vehicles are nowadays also used to power and back up the data centers hosting the cloud infrastructure behind the AI applications we use in our offices and homes every day. Hence, if we are concerned about environmental safety and pollution when driving an automobile, we should be equally concerned when running an AI application. AI is becoming an ethical issue and an energy issue. Amazon, for example, is investing in nuclear power plants to power its data centers and to reduce their carbon emissions. All of this points us toward becoming responsible users of Artificial Intelligence (AI) systems.
Here are three approaches that will encourage responsible-AI adoption.
- (1) Careful selection of AI use-cases
- (2) Understand well the basics and working of AI applications
- (3) Learn to use AI tools and technologies appropriately and efficiently
Careful selection of AI use-cases
While selecting use-cases for AI, the tangible gain in productivity should justify the cost and resources of building the AI application. Thinking carefully about whether a project makes good use of AI and AI tools goes a long way toward using AI responsibly. In general, if a problem has multiple possible outcomes, each with some probability of occurrence, it is a potential use-case for AI. On the other hand, if the problem has a single, deterministic outcome, it doesn't really require AI and can be handled with a rule-based approach.
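As a minimal sketch of this distinction (the refund rule, data, and labels below are hypothetical, purely for illustration), a deterministic business rule needs no model at all, while a problem with probabilistic outcomes can justify training one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single, deterministic outcome -> a plain rule is enough, no AI required.
def approve_refund(days_since_purchase: int) -> bool:
    return days_since_purchase <= 30  # fixed business rule

# Multiple outcomes, each with some probability -> a learned model may be justified.
X = np.array([[3, 120.0], [45, 15.0], [10, 300.0], [60, 8.0]])  # toy features
y = np.array([1, 0, 1, 0])                                      # toy labels
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[20, 50.0]]))  # probability of each outcome
```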
AI use-cases may involve text, numbers, images, videos, or a combination of these. Consider a content-creation or research project. It can be done the old way, which involves reading, collecting, assimilating, analyzing, and producing the report. If it is to be done the AI way, generative LLMs, text-analytics models, or transfer-learning models come into play. Careful consideration of data security then determines whether to call an LLM through an API (if security is not a concern) or to host the LLM in a dedicated, secure cloud (if it is). Cost, resources, and implementation effort all increase when opting for a dedicated secure cloud.
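To make that deployment choice concrete, here is an illustrative sketch only: the endpoints, payload, and sensitivity flag are hypothetical, and a real implementation would use the provider's or your platform's actual API.

```python
import requests

def summarize(text: str, sensitive: bool) -> str:
    """Route a request to a hosted LLM API or a self-hosted model
    depending on data sensitivity. Both endpoints are hypothetical."""
    if sensitive:
        # Model hosted inside a dedicated, secured cloud (higher cost and effort).
        url = "https://llm.internal.example.com/v1/summarize"
    else:
        # Third-party hosted LLM API (lower cost, but data leaves your boundary).
        url = "https://api.llm-provider.example.com/v1/summarize"
    response = requests.post(url, json={"text": text}, timeout=30)
    response.raise_for_status()
    return response.json()["summary"]
```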
This brings up the need for a careful thought process when deciding whether a project would benefit from being implemented with AI. Not to mention the role of a domain expert, who has to review the output and provide feedback to improve model accuracy and to reduce overfitting or underfitting, even after the model produces output based on the best model scores. In a separate blog, https://www.foxtail-research.org/automation-in-manufacturing-leads-to-high-wage-job-creation-at-all-skill-levels/, I highlighted the critical role domain experts play in AI projects in the manufacturing industry.
Similarly, a use-case involving numbers could be served by a straightforward statistical analysis with hypothesis testing and correlations, or by intelligently pivoting the data and displaying it through a data-visualization tool, either of which may satisfy the project objective without any AI at all.
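For example, a numbers-only use-case like this can often be answered with a plain pivot-and-summarize step, sketched here with pandas on made-up data and without training any model:

```python
import pandas as pd

# Toy sales data, purely illustrative.
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "month":  ["Jan", "Feb", "Jan", "Feb"],
    "sales":  [100.0, 120.0, 80.0, 95.0],
})

# An "intelligent pivot" plus summary statistics may already answer the question.
print(df.pivot_table(values="sales", index="region", columns="month", aggfunc="mean"))
print(df["sales"].describe())
```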
To put it simply, it is good practice to first evaluate whether the use-case is a good candidate for an AI project. An equally important consideration is the value it will create for the business or its customers. In the book 'Good Economics for Hard Times', Abhijit V. Banerjee and Esther Duflo recommend that insurance companies build AI applications for post-hospitalization care rather than for processing claims: the former reduces costs for both the hospital and the patient, frees up hospital beds, and reduces patient wait times for admission. Adopting such practices creates a culture of using AI responsibly for the betterment of all. If I were to execute my past projects again, I would evaluate whether each one is best done the same way as before or the AI way.
Understand well the basics and working of AI applications
At the heart of Artificial Intelligence systems lies the neural network model, inspired by the human brain. AI models fit every piece of information in a dataset, whether numbers, text, images, or videos, into a numerical template of matrices and vectors, and process it through a neural network to make predictions and generate outputs. If the dataset is labeled, machines can also learn from it through machine-learning models such as regressors or classifiers to make predictions.
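A tiny sketch of this idea, with made-up numbers: whatever the input, it is first turned into a vector of numbers, which a neural-network layer then transforms.

```python
import numpy as np

# Text might become token counts, an image a flattened grid of pixel values.
text_as_vector  = np.array([0.0, 1.0, 0.0, 2.0])   # 4 toy "token counts"
image_as_vector = np.random.rand(28 * 28)           # 784 toy pixel values

# One small neural-network layer: a matrix of weights, a bias, an activation.
W = np.random.rand(4, 3)                            # 4 inputs -> 3 units
b = np.zeros(3)
hidden = np.maximum(0, text_as_vector @ W + b)      # linear step + ReLU
print(hidden)
```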
It is highly recommended to learn how numbers are represented and processed in AI systems, because understanding these basics helps us use AI efficiently and responsibly. This is accomplished by learning the underlying mathematics (to understand how data is processed), statistics (to understand how models make decisions and predictions, whether the datasets are large, small, or complex), and programming tools and languages (to understand how all these steps are executed and how the output is generated).
Developing understanding, expertise, and versatility in these three areas helps build a strong, responsible workforce for creating sustainable AI applications. Below I briefly go through the topics that underpin an understanding of how AI systems work.
(I) Mathematics
The math behind AI is primarily based on Linear Algebra, Calculus, and Probability and Statistics. These mathematical fields provide the foundational tools for representing data, optimizing models, and handling uncertainty, which are all crucial for building and understanding AI systems.
Here’s a breakdown of each field and its role:
1. Linear Algebra:
(a) Data Representation:
- Linear algebra provides the framework for representing data as vectors, matrices, and tensors, which are essential for processing and analyzing large datasets in AI.
(b) Matrix Operations:
- Core operations like matrix addition, multiplication, and dot products are fundamental to how AI models process information.
(c) Dimensionality Reduction:
- Techniques like Principal Component Analysis (PCA), which utilizes eigenvalues and eigenvectors, help reduce the complexity of data by identifying key features and relationships.
Key Concepts:
Vectors, matrices, tensors, eigenvalues, eigenvectors, and matrix decompositions are all crucial components of linear algebra used in AI.
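A short NumPy sketch of these building blocks, using a made-up 2x2 matrix:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # a small symmetric matrix (e.g. a covariance)
v = np.array([1.0, 2.0])          # a vector

print(A @ v)                      # matrix-vector multiplication
print(np.dot(v, v))               # dot product

# Eigenvalues and eigenvectors: the machinery behind PCA-style
# dimensionality reduction.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)
print(eigenvectors)
```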
2. Calculus:
(a) Optimization:
- Calculus, specifically differential calculus, is used to optimize AI models by finding the best parameters that minimize errors or maximize desired outcomes.
(b) Gradient Descent:
- This is a fundamental optimization algorithm that uses derivatives (gradients) to iteratively adjust model parameters and find optimal solutions.
(c) Neural Networks:
- Calculus is essential for training neural networks, where complex mathematical functions with multiple variables are used to recognize patterns in data.
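The gradient-descent idea above can be shown in a few lines: minimize a toy loss f(w) = (w - 3)^2 by repeatedly stepping against its derivative.

```python
# Toy example: the loss (w - 3)^2 has its minimum at w = 3,
# and its derivative is 2 * (w - 3).
w = 0.0
learning_rate = 0.1

for _ in range(50):
    gradient = 2 * (w - 3)            # slope of the loss at the current w
    w -= learning_rate * gradient     # step downhill

print(w)  # close to 3, the parameter value that minimizes the loss
```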
(II) Statistics
1. Probability and Statistics:
(a) Uncertainty:
- Probability theory helps AI systems deal with uncertainty and make predictions based on incomplete or uncertain information.
(b) Bayes’ Theorem:
- This theorem is used to update probabilities based on new evidence, enabling AI systems to learn from data and improve their predictions over time.
2. Statistical Distributions:
- Understanding statistical distributions, like the Gaussian distribution, helps AI systems model and analyze data patterns.
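The Bayes' theorem update mentioned above can be made concrete with a few lines of arithmetic on made-up numbers, here for a toy spam filter:

```python
# Bayes' theorem on an illustrative example: updating the probability
# of spam after observing a suspicious word. All numbers are made up.
p_spam = 0.2                      # prior P(spam)
p_word_given_spam = 0.7           # likelihood P(word | spam)
p_word_given_ham = 0.1            # likelihood P(word | not spam)

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word   # posterior
print(p_spam_given_word)          # ~0.64: the evidence raised the prior of 0.2
```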
In essence, AI leverages these mathematical and statistical concepts to:
- Represent data in a way that machines can understand and process efficiently.
- Learn from data by identifying patterns and relationships.
- Optimize models to make accurate predictions and decisions.
- Handle uncertainty and make informed decisions in complex environments.
(III) Programming
A skilled programmer can significantly impact AI development. While AI tools are becoming more sophisticated in code generation and other tasks, a deep understanding of programming fundamentals, problem-solving, and the ability to steer and refine AI systems are still crucial. Programmers are needed to design AI architectures, debug complex issues, and ensure the responsible and ethical development of AI systems.
Here’s why a programmer’s expertise is still vital:
AI’s Limitations:
- Current AI models, even advanced ones, still struggle with the complex reasoning, nuanced understanding, and creative problem-solving that come naturally to human programmers.
Steering and Refinement:
- Programmers are needed to guide AI, correct errors, and ensure the AI is producing accurate and reliable results.
Specialized Knowledge:
- A strong programming background helps in understanding the underlying mechanisms of AI systems and allows for more effective utilization of AI tools.
Ethical Considerations:
- Programmers play a crucial role in ensuring that AI systems are developed and deployed ethically, addressing potential biases and unintended consequences.
Adapting to Change:
- As AI continues to evolve, programmers will need to adapt their skills and knowledge to effectively work alongside AI systems and leverage their capabilities.
Higher-Level Abstractions:
- Programmers can design better APIs and frameworks that simplify AI interaction and improve the overall efficiency of the development process.
Demand for Expertise:
- While AI may automate certain tasks, the overall demand for skilled programmers is likely to increase as AI adoption expands and new applications are developed.
Essentially, programmers are not being replaced by AI; rather, their roles are evolving to encompass new challenges and opportunities in the age of AI.
(1) Programming Languages
Knowing how to develop robust and efficient programs helps in accessing and running AI and ML libraries (scikit-learn, TensorFlow, PyTorch, and several others) to process datasets. Though technology companies provide packaged AI services with easy-to-use drag-and-drop features, learning to write efficient programs is essential for building customized applications.
As an AI professional, one cannot expect to avoid developing customized applications. Per the 80/20 rule, packaged AI services may get you 80% of the way; to carry out the remaining 20%, it is crucial to know how to write programs that take the AI project to completion.
Python has become a robust language for AI and ML applications thanks to its huge ecosystem of libraries. Apache Spark provides parallel-processing-enabled ML libraries, written largely in Scala and accessible through APIs such as PySpark, which allow AI workloads for use-cases such as Computer Vision and Natural Language Processing to run faster at scale.
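As a minimal sketch (assuming a local Apache Spark installation with PySpark available; the toy data mirrors the earlier scikit-learn example), the same kind of model can be trained on Spark's parallel engine:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("responsible-ai-sketch").getOrCreate()

# Toy dataset: two numeric features and a binary label.
df = spark.createDataFrame(
    [(3.0, 120.0, 1), (45.0, 15.0, 0), (10.0, 300.0, 1), (60.0, 8.0, 0)],
    ["days", "amount", "label"],
)

# Assemble the feature columns into a single vector column, then fit.
assembler = VectorAssembler(inputCols=["days", "amount"], outputCol="features")
train = assembler.transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)

model.transform(train).select("label", "probability").show()
spark.stop()
```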
(2) Database technologies
Knowing how to manage databases can make a huge difference in the back-and-forth cycles of AI development. AI applications increasingly represent data as vectors, and vector databases enable efficient storage, indexing, and querying of those vectors, making them a crucial component of many AI systems.
Database technology has evolved to address these workloads, giving rise to what are considered modern databases: vector databases, or databases with vector capabilities. Unlike traditional relational databases, databases that support vectors are specifically designed to handle a wide variety of data, which allows them to support the increased workload of AI applications.
In these systems, various data structures and media types are transformed into vectors, mathematical representations that AI can process easily; the transformation is typically performed by an embedding model, and the vector database stores and indexes the results. This allows AI systems to identify similarities and patterns among different pieces of data, such as finding images that look alike or texts with similar meanings.
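As an illustration of what such a system does under the hood (the "embeddings" below are made up; a real pipeline would obtain them from an embedding model and store them in an actual vector database), similarity search boils down to comparing vectors:

```python
import numpy as np

# Toy "embeddings" keyed by document name, purely illustrative.
stored = {
    "doc_about_cats":   np.array([0.9, 0.1, 0.0]),
    "doc_about_dogs":   np.array([0.8, 0.3, 0.1]),
    "doc_about_stocks": np.array([0.0, 0.2, 0.9]),
}

def most_similar(query: np.ndarray, top_k: int = 2):
    """Return the top_k stored items ranked by cosine similarity to the query."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(query, vec) for name, vec in stored.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(most_similar(np.array([0.85, 0.2, 0.05])))  # returns the two pet documents
```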
Learn to use AI tools and technologies appropriately and efficiently
(1) Cloud Technologies
AI and the cloud go hand in hand. Knowing how to manage the cloud, and writing efficient programs that manage cloud resources well, is critical to becoming an expert and responsible AI programmer.
Cloud technologies commonly used in AI include cloud computing platforms like AWS (Amazon Web Services), Azure (Microsoft Azure), and GCP (Google Cloud Platform). These platforms offer a wide range of services for AI development, including machine learning, natural language processing, and computer vision. Specific tools and services include Amazon SageMaker, Azure Machine Learning, and Google Cloud AI Platform.
Cloud computing provides the infrastructure and resources necessary for developing and deploying AI applications. It empowers businesses to leverage AI for a wide range of applications, from enhancing customer service with chatbots to improving business intelligence and enabling innovative solutions across industries. It provides scalability and flexibility: on-demand computing power and storage that allow AI models to be scaled up or down based on workload, which is essential for reducing energy and power usage.
(2) Container technologies
In AI, container technologies like Docker and Kubernetes are heavily used to package, deploy, and manage AI models and their dependencies. Docker provides the containerization, while Kubernetes handles orchestration, scaling, and automation of these containers.
Containerization packages the AI model, its libraries, and its runtime environment into a single container. This ensures the model runs the same way across different environments, such as the cloud, a local machine, or edge devices, and it simplifies the deployment and management of AI models, reducing development and deployment time.
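As a minimal sketch (assuming Docker Engine and the Docker SDK for Python are installed; the image name and port are hypothetical), a packaged model-serving container can be launched programmatically:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run a hypothetical, pre-built image that serves an AI model over HTTP.
container = client.containers.run(
    "my-registry/my-model-server:latest",   # hypothetical image name
    detach=True,
    ports={"8080/tcp": 8080},               # expose the model's API port
)

print(container.status)   # the same container runs identically on any host
container.stop()
```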
Orchestration adds a further layer on top of containerization by automating the deployment, scaling, and management of containers. It enables efficient scaling of AI workloads and is essential for running data-intensive workloads.
There are several other container and orchestration technologies provided by technology companies to facilitate the smooth running and scaling of AI systems, automation, and simplified management of AI infrastructure.
Conclusion
The world is going the AI way and businesses are bracing to go with the flow. Though this adoption is encouraging and exciting, it comes at a cost: increased power, energy, and water usage and increased greenhouse-gas emissions from the data centers that host the cloud infrastructure running AI systems. The stakes are high, and that demands we use AI responsibly. In this blog, I discussed three approaches to becoming AI-responsible: carefully selecting AI use-cases, learning the theoretical, mathematical, and programming concepts underlying how AI systems work, and learning to use AI tools and technologies appropriately and efficiently. By equipping ourselves in these three ways, we can build a sustainable and responsible AI world. If AI is first, Responsible-AI is foremost.