
☒️ Risks and Limitations of AI

As AI becomes a more integrated part of our society, you must become familiar with the risks it poses. Many of the risks are technical, but the scariest ones are human. The current consensus is that AI is not ready to make unsupervised decisions, yet many users treat it as a cure-all instead of a tool. (You're definitely smarter than that.)

Artificial Intelligence (AI) is advancing rapidly and changing the way we live and work. As with any new technology, its use comes with risks.

AI Does What We Ask

The biggest challenge with AI is that it doesn't understand our world. It only understands the inputs we give it and solves the problems we ask it to. Unless we specify our problems in great detail, we see issues with how it executes them.

Check out this video to see how AI can struggle. It does a good job illustrating the concept that AI follows exactly what it's told. Remember: AI has no judgement, it just follows instructions.

Bias & System Design

One of the major risks associated with AI comes from bias and system design. AI systems are only as unbiased as the data they are trained on. If there is bias in the data, then the AI system will have bias too. This can lead to unfair and discriminatory outcomes, especially in areas such as hiring, lending, and criminal justice.

Systems need to balance creating value against reinforcing systemic inequality. This is the hot-button issue of AI. Responsible AI is more than a phrase; it is what determines whether AI is useful at all.

Because our data is so limited, we use proxies. These proxies aren't the same as real-world data, and the gap between them is where bias creeps in.

We haven't yet cracked exactly how to do this, and we need resources dedicated to the problem, particularly when these systems operate at a societal level. The biggest struggle with AI is that it reflects our bias back to us.
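To see how bias in the data becomes bias in the system, here is a minimal sketch using synthetic "hiring" data. Everything in it (feature names, constants) is invented for illustration, and it assumes numpy and scikit-learn are installed. The protected attribute is excluded from training, but a correlated proxy smuggles it back in:

```python
# A toy illustration of "bias in, bias out" on synthetic hiring data.
# Every feature name and constant here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)            # identical skill distribution for both groups
proxy = group + rng.normal(0, 0.3, n)  # e.g. a zip-code-like feature correlated with group

# Historical decisions favored group 0, independent of skill.
hired = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, n)) > 0.5

# Train WITHOUT the protected attribute; the proxy smuggles it back in.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Identical skill distributions, yet very different predicted hire rates.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {preds[group == g].mean():.2f}")
```

Dropping the protected attribute wasn't enough: the model learned the historical bias through the proxy anyway. That's the sense in which AI reflects our bias back to us.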

If you want to read more about responsible AI and bias, check out this MIT Technology Review interview with William Isaac, a research scientist working on AI ethics at DeepMind.

Incorrect Responses

AI researchers have a distinct term for when AI makes up facts: "hallucination." One of the traps of consumer use of AI is unsubstantiated responses.

It is important to remember that AI systems have never stepped into the real world. They only understand it through our descriptions of "meatspace." This means they are liable to make up answers based on loose correlations.

Some tools, like Dust.tt, are working on solutions that get around this by connecting models to reality. Another approach is fine-tuning models on curated facts: for specific use cases, facts can be marked as true, and the model will figure out how to use them correctly. As you can tell, this isn't a perfect solution.
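To make the "facts marked as true" idea concrete, here is a minimal sketch of grounding a prompt in verified facts. All the names here (`VERIFIED_FACTS`, `build_prompt`, `call_model`) are hypothetical, and `call_model` is a placeholder for whatever chat API you actually use:

```python
# A sketch of "grounding": hand the model verified facts instead of trusting
# its internal recall. VERIFIED_FACTS and call_model are hypothetical names.

VERIFIED_FACTS = {
    "launch_year": "Our product launched in 2019.",
    "refund_policy": "Refunds are available within 30 days of purchase.",
}

def build_prompt(question: str) -> str:
    facts = "\n".join(f"- {fact}" for fact in VERIFIED_FACTS.values())
    return (
        "Answer using ONLY the facts below. If the facts don't cover "
        "the question, say you don't know.\n\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion call here")

if __name__ == "__main__":
    print(build_prompt("When did the product launch?"))
```

The model still decides how to use the facts, which is why this pattern narrows hallucination rather than eliminating it.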

All this is to say, it is crucial not to rely exclusively on AI. When we talk about our process for using AI, one of the most important steps is to verify.

Lack of Citations

Because we don't know exactly where the training data comes from, we run the risk of plagiarizing and spreading false information. This is particularly important in text generation, where the tendency is to take answers as truth. In the first iterations of GPT, this was a massive issue: if you asked for citations, it would confidently give you fake links.
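One cheap verification habit is checking whether the links a model gives you actually resolve. A working link doesn't prove the citation is accurate, but a dead one is a strong hint it was fabricated. A standard-library-only sketch (the helper name is ours):

```python
# Pull URLs out of a model's answer and check whether each one resolves.
# A 404 or DNS failure is a strong hint the model made the link up.
import re
import urllib.request
import urllib.error

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def check_links(answer: str, timeout: float = 5.0) -> None:
    for url in URL_PATTERN.findall(answer):
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                print(f"OK   {resp.status}  {url}")
        except (urllib.error.URLError, ValueError) as err:
            print(f"FAIL {err}  {url}")

if __name__ == "__main__":
    check_links("See https://example.com and https://example.com/no-such-page")
```

Treat this as a first-pass filter only; a real citation check still requires reading the source.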

One solution comes from DeepMind, with their model Sparrow. Sparrow cites its sources within the chat interface and defers on conversations that aren't conducive to AI responses.

Outsourcing Critical Thinking

Your biggest advantage is your ability to think creatively and critically. Critical thinking is the analysis of facts, evidence, observations, and arguments to form a judgement. AI accepts data as-is; it doesn't (yet) have the ability to question its inputs rather than take them at face value. Luckily for us, this is a human superpower.

Because AI doesn't provide a log of how it reached a conclusion, as a user you must understand your outputs and check that they make sense in context. Accepting information from AI without verifying it is dangerous.

As AI writing becomes more common, abuse is inevitable. It can lead to disinformation, low-signal content, and a buffer between stakeholders. It is crucial to take this into consideration and evaluate AI-written content for what it is.

Staying vigilant is the biggest challenge with adopting AI at an individual level.

Data Privacy

One issue that has come to the top of mind is data privacy. Because AI requires massive amounts of data to become functional, data is the oil of our time; just ask The Economist.

When using OpenAI tools, it is important to note how your data is being used. Your prompts are held for up to 30 days and can be used in a human review process. However, if you fine-tune a model, the data it is fine-tuned with is stored in its own instance. If you're curious, Microsoft talks about it here.
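The course doesn't prescribe a fix here, but one practical habit is scrubbing obvious personal data from prompts before they leave your machine. A minimal sketch, assuming a few illustrative regex patterns (they are nowhere near exhaustive, and the `redact` helper is hypothetical):

```python
# A sketch of scrubbing obvious personal data from a prompt before it is
# sent to a third-party API. These regexes are illustrative, not exhaustive;
# real PII detection needs far more than this.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email jane.doe@example.com or call +1 (555) 123-4567."))
```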

One of the issues with the value of data is that it is often harvested without the consent or understanding of the consumers providing the data... you.

Power Consolidates

The biggest companies are consolidating power. As we mentioned in the AI Impacts section, most of the largest technology companies are investing heavily in AI. This sets the table for them to continue to consolidate power.

AI Going Super-Human

This is the Terminator risk. We don't need to dive in... get creative.

[Image: Terminator]