The opinions expressed by Entrepreneur contributors are their own.
Although AI has been around since the 1950s, it only became a household name after OpenAI launched ChatGPT less than two years ago. As an investor in both OpenAI and xAI, I constantly wonder about the ethical implications of this new technology, how intelligent it really is, and whether it is the best path forward for humanity. As an experiment, I asked ChatGPT for its opinion on artificial intelligence (AI) systems in today's society, and this was its response:
“As a language model for artificial intelligence, I have no personal opinions or beliefs. However, I can provide an objective analysis of AI systems in today's society. AI has brought numerous benefits to society, such as improved efficiency, accuracy, and decision-making. It has also contributed to advances in medical research, transportation, and various other industries. However, the use of AI systems in today's society also poses potential risks, such as job loss, privacy concerns, and the possibility of biased algorithms. Therefore, it is critical to ensure that AI is developed ethically and responsibly to mitigate these risks and ensure that the technology benefits society as a whole. In summary, AI systems have tremendous potential to have a positive impact on society, but their use must be done with careful consideration and ethical responsibility.”
Related: The only downside to AI is your mindset. Here are 4 ways to change your mindset and use AI for your success.
While this response shows that AI is still robotic and detached in its reasoning, it is also well aware of the ethical concerns associated with its use. At this critical point in history, on the brink of mass adoption of AI, we still cannot seem to decide whether AI will be our savior or our villain. Ultimately, it all comes down to the application and responsible management of AI systems.
As an investor, I am in a unique position to help these new AI companies move forward. Over the last two years, we have seen a gold rush of AI startups. Many have closed impressive funding rounds, many have failed, and some have already been acquired. The first two I invested in, OpenAI and xAI, are now well known: the former for meeting the challenge of global mass adoption, the latter for its commitment to building a more ethical AI ecosystem.
When it comes to AI applications, it's safe to say that humans have always strived to create tools that make their lives easier; this is nothing new. In the past, "mechanical muscles" and outsourcing shifted physical labor. Now we're at a point where "mechanical minds" will unleash a flood of applications across all industries. Automation has evolved beyond the merely physical.
One key development driving AI adoption is Microsoft's continued investment in OpenAI. This partnership will likely result in everything from optimized Excel spreadsheets to AI-generated PowerPoint decks and smarter email management. Even white-collar work is now ripe for disruption.
In finance, AI can be applied in a number of ways. AI algorithms can identify behavioral patterns, such as unusual transactions or signs of identity theft, allowing financial institutions to quickly detect and prevent fraudulent activity. AI can also analyze market trends and make predictions about future movements, allowing institutions to make better-informed decisions.
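To make the idea of flagging unusual transactions concrete, here is a minimal illustrative sketch. The transaction amounts and the three-standard-deviation threshold are hypothetical assumptions for the example, not a description of any production fraud system, which would use far richer features and models:

```python
# Minimal sketch: flag a transaction that deviates sharply from
# a customer's usual spending. The sample data and the 3-sigma
# threshold are illustrative assumptions only.
from statistics import mean, stdev

def flag_unusual(amounts, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold`
    standard deviations from the historical mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]  # a customer's past transactions
print(flag_unusual(history, 50.0))   # typical amount -> False
print(flag_unusual(history, 900.0))  # far outside the usual range -> True
```

Real systems layer machine-learned models on top of many such signals, but the principle is the same: learn what "normal" looks like from data, then surface deviations for review.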
While this is just one example of the shift in the industry, some areas of work will be less impacted by AI. These include industries with information asymmetry, such as early-stage venture capital. For AI to work effectively, the model must have access to data. In industries whose data is private, siloed, and complex, a model cannot train and draw conclusions in the same way as one built on publicly available information. The strength of AI depends on the value of the underlying data and models, which in turn depend on the quality of the rules set by the humans who develop the algorithms.
In terms of management, as AI becomes more widely adopted, we must maintain a certain level of skepticism and criticism. These tools will evolve, but they should not be viewed as an all-knowing source of truth. This is critical because these systems only know what is in their underlying data. As humans, investors, and business-minded individuals, we must recognize the strength of these systems while remaining mindful that they require constant maintenance. Even as AI uses ever-evolving algorithms that learn from data and experience, we must continue to adapt the rules and data sources that support these systems.
Related: AI will radically change the workplace. How HR teams can prepare.
For those developing these systems, diversity of thought and perspective is critical. A system created by people with biases will be inherently biased itself. We need to ensure that these systems are governed by human values. In today's society, where truth often seems subjective, trusting these systems can be both extremely beneficial and extremely damaging. We must refrain from programming systems to tell people what they want to hear, and people must understand and respect the limitations of AI.
At the same time, just because humanity is capable of doing something doesn't mean it should. We could probably replace most jobs with AI, but should we? Where do we draw the line? How do we ensure honesty and integrity in future systems and platforms? This is part of what xAI and OpenAI are trying to address, and the reason I believed in and supported them from the beginning.
AI will undoubtedly change all of our lives, but this change will only be positive if we continue to be critical stewards of truth and information. As an investor, I advise everyone to maintain a healthy dose of skepticism when investing in AI-powered solutions. Look at the people behind these systems, because their beliefs are what teach and drive the solution.