The Brave New World of AI Is Upon Us

Why You Should Care About AI & Regulation

Like everyone else in tech, I've watched ChatGPT take over my household. I swear, over the past few weeks, we've used it for everything from recipes to settling marital arguments. We've been spending a lot of time heads-down raising for the fund, so honestly, using ChatGPT has been a fun distraction for us. (While we've been doing all this fundraising, I've been listening to this week's playlist…)

It's fun to see AI become more accessible to the masses and layman-friendly. I've been working with AI for healthcare since 2019, when I built a pregnancy risk score that leveraged AI and ML models.

But at the same time, I see a lot to be concerned about on the regulatory side. Over the next few weeks, I'll be doing a deep dive into the concerns with AI that could lead to a heavy regulatory structure, the regulations currently in place, and what I expect to see around AI regulation in different sectors in the near future.

Nobody debates that AI is coming down the tracks like a runaway freight train. Unfortunately, very few people are thinking about what the accompanying regulatory landscape will look like. That means diving deep into emerging areas like the broader regulation of algorithms, where the EU, OECD, IEEE, and other supra-national bodies have already started digging in.

AI is currently governed primarily through self-regulatory efforts, such as industry-led codes of ethics and standards developed through collaboration between the private sector, civil society, and governments. These efforts are coalescing around frameworks like the OECD AI Principles and the IEEE's Ethically Aligned Design initiative, which seek to provide a set of ethical principles and guidelines for the responsible implementation and use of AI. In some jurisdictions, binding laws and regulations have already begun to emerge, most notably the European Union's proposed AI Act.

Unsurprisingly, it looks like the US will be lagging in the space, even though we have taken the lead in developing AI companies and products for the world. One clear bright spot for us domestically is the Blueprint for an AI Bill of Rights that the White House released in October 2022.

Many people don't realize this, but back in 2017, Elon Musk was one of the first prominent voices calling for AI to be heavily regulated. He has said, "Mark my words, AI is far more dangerous than nukes…why do we have no regulatory oversight?" He is both wrong and right on this front. While things like ChatGPT are being used for innocuous and humorous use cases now, the technology is powerful enough to cause some pretty dangerous aftershocks for humanity.

Now, if you ask most people in technology how they feel about AI regulation, you'll most likely get one of the following responses:

  1. AI should be subject to moral and ethical regulations, similar to those governing human behavior.

  2. Companies should be held liable for damages and losses caused by autonomous AI systems and robots.

  3. Governments should put in place regulations to prevent the monopolization of AI technologies.

  4. Artificial intelligence should only be developed and used responsibly, with clear guidelines and restrictions.

  5. Governments should enforce mandatory safety testing and certification of AI systems prior to deployment in the real world.

It's interesting to view these responses in parallel with the most common issues that ethicists have with AI.

Most experts have identified the following ethical concerns surrounding the use of AI:

  1. Bias: AI systems can perpetuate, and even amplify, societal biases if they are trained on biased data.

  2. Transparency: It can be difficult to understand how an AI system is making decisions, which can make it hard to trust or hold the system accountable.

  3. Control: As AI systems become more advanced, there is a risk that they could become autonomous and make decisions that humans may not agree with.

  4. Job displacement: AI has the potential to automate many jobs, which could lead to widespread unemployment.

  5. Privacy: AI systems can collect, store, and use large amounts of personal data, which raises concerns about how that data is being used and protected.

  6. Security: AI systems are vulnerable to attacks and could be used to spread misinformation or launch cyberattacks.

  7. Explainability: The decision-making process of AI systems is often opaque, making it hard to understand how the system arrived at a particular decision.

  8. Regulation: there is a current lack of laws and regulations governing the use of AI.

  9. High-stakes domains: ensuring the ethical use of AI in fields such as healthcare, criminal justice, and warfare.

As an economist and legal scholar, my biggest concern around AI is the fact that it's created by humans. Whether we like it or not, all kinds of biases go into our work. Algorithms being built for financial, healthcare, educational, real estate/housing, or other products most likely carry built-in biases that force already marginalized folks to live with the results of an ever-unfair society, which is the exact opposite of what most AI-based companies and teams are trying to do. In my experience, most builders in this space are very thoughtful and altruistic (though naive) about the bias issue.

One way biases can be introduced to AI algorithms is through the data that is used to train them. If the data used to train an algorithm is not representative of the population it will be used on, the algorithm may make inaccurate or unfair predictions. For example, if a facial recognition algorithm is trained on a dataset of mostly white faces, it may have difficulty recognizing faces of other races. Similarly, if an algorithm used to predict loan defaults is trained on a dataset of mostly white borrowers, it may be less accurate at predicting defaults for borrowers of other races.
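To make that concrete, here's a minimal sketch of the kind of representativeness check a team might run on training data before fitting a model. Everything in it is an illustrative assumption: the group labels, the counts, the population shares, and the 50%-of-expected flagging threshold are all made up for demonstration.

```python
# A minimal, hypothetical training-data representativeness check.
from collections import Counter

# Hypothetical demographic labels attached to each training example.
train_groups = ["white"] * 820 + ["black"] * 90 + ["asian"] * 60 + ["other"] * 30

# Hypothetical shares of each group in the population the model will serve.
population_share = {"white": 0.60, "black": 0.13, "asian": 0.06, "other": 0.21}

counts = Counter(train_groups)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts[group] / total
    # Arbitrary rule for this sketch: flag any group present at less than
    # half its expected share of the deployment population.
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group:>6}: train={observed:.2%} population={expected:.2%} {flag}")
```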

Another way biases can be introduced is through the design of the algorithm itself. If an algorithm is designed to optimize for a single metric, such as overall accuracy, it may make decisions that are unfair to certain groups of people. A criminal justice algorithm designed to optimize for the number of crimes solved, for instance, may drive higher incarceration rates for certain groups, such as minorities.
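Here's a toy demonstration of that mechanism using entirely synthetic data. A single decision threshold is chosen to maximize overall accuracy; because the minority group's true positives score lower on average (all the distributions below are fabricated for illustration), the "optimal" threshold quietly concentrates the errors on the smaller group.

```python
# Toy demo: optimizing one global metric (accuracy) can yield unequal
# error rates across groups. All numbers are fabricated.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic model scores: minority-group positives score lower on average,
# e.g. because the features were tuned on majority-group data.
maj_pos = rng.normal(0.70, 0.10, 900)   # majority group, true positives
maj_neg = rng.normal(0.40, 0.10, 900)   # majority group, true negatives
min_pos = rng.normal(0.55, 0.10, 100)   # minority group, true positives
min_neg = rng.normal(0.40, 0.10, 100)   # minority group, true negatives

def accuracy(t):
    correct = ((maj_pos >= t).sum() + (maj_neg < t).sum()
               + (min_pos >= t).sum() + (min_neg < t).sum())
    return correct / 2000

# Pick the single threshold that maximizes overall accuracy...
thresholds = np.linspace(0, 1, 201)
best_t = thresholds[np.argmax([accuracy(t) for t in thresholds])]

# ...then look at who pays for it: the miss (false negative) rate per group.
print(f"threshold chosen for max overall accuracy: {best_t:.2f}")
print(f"majority-group miss rate: {(maj_pos < best_t).mean():.1%}")
print(f"minority-group miss rate: {(min_pos < best_t).mean():.1%}")
```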

Regulations can be used to prevent biases from harming people by ensuring that AI algorithms are developed and used in a responsible manner. One way to do this is by requiring that AI algorithms be tested for biases before they are deployed. This can be done by evaluating the algorithm on a diverse dataset and comparing its performance for different groups of people.
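As a sketch of what such a pre-deployment test could look like, the snippet below compares a model's selection rate across two groups and computes a disparate impact ratio; the 0.8 cutoff borrows from the EEOC's "four-fifths rule." The predictions here are random stand-ins for a real model's output on a held-out, diverse dataset.

```python
# A minimal, hypothetical pre-deployment bias audit.
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for model.predict(X_holdout): 1 = favorable outcome (e.g. loan
# approved), 0 = unfavorable. Group labels and rates are fabricated.
groups = np.array(["A"] * 700 + ["B"] * 300)
preds = np.concatenate([rng.binomial(1, 0.60, 700),   # group A approvals
                        rng.binomial(1, 0.35, 300)])  # group B approvals

# Selection rate per group, and the ratio of the worst to the best.
rates = {g: preds[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())

for g, r in sorted(rates.items()):
    print(f"group {g}: selection rate {r:.1%}")
print(f"disparate impact ratio: {ratio:.2f}"
      f" -> {'FAILS' if ratio < 0.8 else 'passes'} the four-fifths rule")
```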

Another way to prevent biases from harming people is to require that AI algorithms be transparent, meaning the decision-making process of the algorithm can be understood and explained. This can be done by using interpretable models, which expose the reasoning behind the algorithm's decisions. Additionally, regulations can be used to ensure that AI algorithms are applied in ways consistent with human rights and ethical principles.
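One common illustration of an "interpretable model" is a shallow decision tree, whose full decision logic can be printed and read by a human end to end. The sketch below is only that, an illustration: the dataset is synthetic and the feature names are hypothetical placeholders, not a real credit or healthcare model.

```python
# Sketch: a shallow decision tree as an interpretable model.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data; the feature names are hypothetical stand-ins.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["income", "debt_ratio", "age", "tenure"]

# Limiting depth keeps the whole tree small enough to audit by eye.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints every decision rule the model can ever apply.
print(export_text(tree, feature_names=feature_names))
```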

Finally, regulations can be used to ensure that organizations deploying AI algorithms take steps to mitigate any potential harm caused by biases. For example, regulations can require organizations to appoint a responsible person or team to oversee the use of AI algorithms, ensure that the data used to train them is diverse, and provide regular training and education to employees on how to detect and address bias in AI systems.

Phew…I realize that's a lot, so thanks for sticking around. Next week I'll be doing a pretty intense deep dive into the regulatory impacts each sector using AI should expect to see, and the downstream economic/commercial/go-to-market effects of those regulations. I'm excited to be doing more long-form content, so let me know how these first few weeks of newsletters are hitting versus your expectations. Also, we're starting to do more work on what community and events look like for the now almost 10,000 subscribers we have, so feel free to email me ([email protected]) with any and all suggestions you might have!