For more on artificial intelligence (AI) applications in investment management, read The Handbook of Artificial Intelligence and Big Data Applications in Investments, by Larry Cao, CFA, from CFA Institute Research Foundation.
ChatGPT has launched a new era in artificial intelligence (AI).
The chatbot, built by OpenAI and powered by the GPT-3.5 and GPT-4 families of large language models (LLMs), responds to natural language prompts much like a very well-informed human assistant, and it has continued to evolve with the introduction of GPT-4 and the ChatGPT APIs and plugins.
Other tech giants haven’t sat idly by. Google and NVIDIA, among others, have shown their commitment to the rapidly evolving technology by announcing a series of innovative generative AI (GenAI) services in recent months. Indeed, each week it feels like the AI industry is experiencing a year’s worth of progress.
But what does it mean for investment management? How will all the ChatGPT- and LLM-related developments affect how investment professionals work?
ChatGPT: An Overview
ChatGPT is an AI language model developed by OpenAI. Trained with a technique called reinforcement learning from human feedback (RLHF), it processes natural language prompts and provides detailed responses shaped by that human input.
GPT stands for Generative Pre-trained Transformer. It is a type of GenAI that can produce new data based on the training data it has received. The leap from natural language processing (NLP) to natural language generation represents a significant advancement in AI language technology.
The model is pre-trained on vast amounts of text to learn the statistical patterns of language that it then applies when responding to queries. GPT-3, for example, has 175 billion parameters; GPT-4 is believed to be larger still, though OpenAI has not disclosed its size. Nevertheless, both models are limited by their training data’s cutoff date and cannot incorporate new and time-sensitive information in real time.
The transformer architecture is a deep learning technique for extracting and analyzing textual data. It underpins both ChatGPT and the Bidirectional Encoder Representations from Transformers (BERT) language model developed by Google.
The different components of the GPT architecture (token embeddings, stacked self-attention layers, and feed-forward networks) work in synchrony to turn a prompt into a prediction of the next token.
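To make this less abstract, here is a minimal sketch in Python of the scaled dot-product self-attention operation at the heart of the transformer. The tiny dimensions and random weights are illustrative assumptions, not the actual model’s:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project tokens to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise token affinities
    # Causal mask: GPT is decoder-only, so each token attends only to earlier tokens.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9
    return softmax(scores) @ V                # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (illustrative sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```

Stacking many such layers, each followed by a feed-forward network, yields the deep transformer that GPT models scale up.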
ChatGPT Learning Methods
ChatGPT is built on the GPT series, either GPT-3.5 or GPT-4, and adapted for conversational applications. Fine-tuned on conversational data, it generates more relevant, engaging, and context-aware responses.
The GPT model is first trained using a process called “supervised fine-tuning” with a large amount of pre-collected data. Human AI trainers provide the model with initial conversations between a questioner and an answerer. This process is like personal training for an AI assistant.
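Mechanically, this supervised stage minimizes a standard next-token cross-entropy loss over the trainer-written conversations. A minimal sketch, with toy numbers standing in for a real model’s predictions:

```python
import numpy as np

def next_token_loss(probs, targets):
    """Average cross-entropy between the model's predicted next-token
    distributions and the tokens the human trainer actually wrote."""
    return -np.mean(np.log(probs[np.arange(len(targets)), targets]))

# Toy example: a vocabulary of 5 tokens and 3 prediction steps (illustrative numbers).
probs = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.10, 0.60, 0.10, 0.10, 0.10],
    [0.20, 0.20, 0.40, 0.10, 0.10],
])
targets = np.array([0, 1, 2])  # token IDs from the demonstration conversation
print(next_token_loss(probs, targets))  # ~0.59; training pushes this figure down
```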
After this, the model undergoes reinforcement learning (RL), which involves creating a reward mechanism and collecting comparison data consisting of two or more model responses that are ranked by quality.
To further refine the model, OpenAI collected data from conversations between AI trainers and the chatbot. It randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, OpenAI fine-tuned the model with Proximal Policy Optimization (PPO) and performed several iterations of this process to improve the model’s performance.
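The reward model behind these rankings is typically trained with a pairwise loss that pushes the score of the preferred response above that of the rejected one, as in OpenAI’s InstructGPT work. A minimal sketch with made-up reward scores:

```python
import numpy as np

def pairwise_reward_loss(r_preferred, r_rejected):
    """Negative log-likelihood that the reward model ranks the
    trainer-preferred response above the rejected one."""
    p_correct = 1.0 / (1.0 + np.exp(-(r_preferred - r_rejected)))
    return -np.log(p_correct)

# Illustrative scalar rewards assigned to two completions of the same prompt.
print(pairwise_reward_loss(r_preferred=1.2, r_rejected=0.3))  # ~0.34: ranking is right
print(pairwise_reward_loss(r_preferred=0.1, r_rejected=1.0))  # ~1.24: ranking is wrong
```

PPO then nudges the chat model toward responses that this learned reward scores highly, while keeping it close to the supervised model.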
ChatGPT’s Limitations
ChatGPT’s shortcomings are well-known. It may provide plausible-sounding but incorrect or nonsensical answers due to the limitations of RL training. OpenAI acknowledges that there is currently no single source of truth for RL training and that ChatGPT is designed to answer questions to the best of its abilities rather than leave them unanswered. The quality of its responses depends on the question’s phrasing and the information ChatGPT has learned through supervised training.
ChatGPT does not have values in the same way that humans do. While it has been trained to ask clarifying questions in response to ambiguous queries, it often guesses at the user’s intended meaning instead. OpenAI has made efforts to prevent ChatGPT from responding to harmful or inappropriate requests, but the LLM may still exhibit biased behavior at times. That’s why users should screen its suggestions and forecasts for illegal, unethical, aggressive, or biased content rather than accept them at face value.
ChatGPT can also be verbose and overuse certain phrases, often stating that it is a “large language model trained by OpenAI.” These habits stem from biases and over-optimization issues in the training data and process: trainers may prefer longer answers that appear more comprehensive, for example.
While ChatGPT and other language models are generally excellent at summarizing and explaining text and generating simple computer code, they are not perfect. At their worst, they may “hallucinate,” spitting out illogical prose with made-up facts and references or producing buggy code.
LLM Scaling Laws, Few-Shot Learning (FSL), and AI Democratization Potential
GPT models offer unique features that distinguish them from BERT and other mainstream AI models and reflect the evolution of AI applications for NLP.
Like GPT, BERT is a pre-trained model that learns from vast amounts of data and is then fine-tuned for particular NLP tasks. After pre-training, however, the models diverge. BERT requires fine-tuning with task-specific data to learn task-specific representations and parameters, which demands additional computational resources. GPT models instead employ prompt engineering and few-shot learning (FSL) to adapt to a task without fine-tuning: given a handful of example tasks in the prompt, they can generate appropriate outputs for inputs they have never seen, drawing on the knowledge absorbed during pre-training.
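As an illustration, here is a minimal few-shot prompt using the OpenAI Python library’s chat interface as it existed at the time of writing. The sentiment-labeling task and headlines are our own assumptions; two in-context examples specify the task with no fine-tuning at all:

```python
import openai  # pip install openai (0.x interface, current when this was written)

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

# Two in-context examples teach the task; no gradient update or retraining occurs.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Label the sentiment of each headline as positive or negative."},
        {"role": "user", "content": "Earnings beat estimates; shares rally."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "Regulator opens probe into accounting practices."},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "Company raises full-year guidance."},
    ],
)
print(response.choices[0].message.content)  # expected: "positive"
```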
Scaling laws, which Jared Kaplan et al. highlighted in 2020, are among GPT models’ essential features. Performance improves as model size, training dataset size, and the computing power used for training increase in tandem, and empirical performance has a power-law relationship with each individual factor when not bottlenecked by the others. GPT-4 follows this law and can achieve high performance without fine-tuning, sometimes exceeding previous state-of-the-art models. Moreover, scaling laws hold in other media and domains, such as images, videos, and mathematics.
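In Kaplan et al.’s formulation, test loss falls as a power law in model size N, with analogous laws for dataset size and compute. The constants below are approximate values as reported in their 2020 paper:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,
\qquad N_c \approx 8.8 \times 10^{13} \text{ parameters}
```

Each doubling of N thus trims the loss by a small but predictable fraction, roughly 5% at this exponent, which is why ever-larger models keep improving.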
The features of GPT models represent a paradigm shift in AI development away from traditional models trained for each specific task. GPT models do not require large local computational resources or additional training data, and tasks are tackled through FSL rather than model fine-tuning or retraining. However, a limited number of players, such as Google and Amazon, could control the supply of LLMs on cloud computing platforms, creating an oligopoly that hinders the democratization of AI development.
Does ChatGPT Create or Destroy Human Jobs? The Potential Use Cases
ChatGPT as an AI language model does not steal human jobs in the traditional sense. It is a tool designed to assist humans in tasks that involve language processing, such as generating text and answering questions. While ChatGPT can automate certain functions and reduce the need for human involvement in them, it can also create new jobs that require AI, data analysis, and programming skills.
AI cannot yet replicate human behavior across a number of dimensions, including originality, creativity, dexterity, empathy, and love. These are essential components of many jobs that require human connection, intuition, and emotional intelligence. AI tools work best on well-defined, repetitive tasks where efficiency matters, such as data entry, transcription, and language translation.
The risk of replacement by ChatGPT or other AI is higher for positions that rely more on natural language or involve repetitive, automated tasks such as customer support desks and research assistants. However, roles that require unique decision making, creativity, and accountability, such as product development, are likely to remain in human hands. While originality and creativity have no easy definition, we humans should focus on tasks that we are good at, enjoy, and can perform more efficiently than machines. As Alan Kay said, “The best way to predict the future is to invent it.”
Although machines can assist with decision making and persuasion, humans may be better equipped to conduct groundbreaking discoveries and exercise responsibility for their actions. In investments, ChatGPT may provide assistance rather than full automation.
Potential ChatGPT Use Cases for Investment Professionals
Function | Potential Use Cases
Investment Research and Portfolio Management | Synthesize investment stories. Draft investment commentaries. Translate, summarize, and augment research reports. Assist computer programming to automate data handling.
Portfolio Advisers, Wealth Management | Write personalized investment advice for clients.
Marketing | Produce investment content for clients. Create press releases, marketing materials, and websites.
Client Support | Respond to client queries. Conduct sentiment analysis on client communications.
Legal and Compliance | Draft contracts. Review marketing documents against compliance guidelines. Generate ideas for compliance programs.
Process Automation and Efficiency | Automate routine documentation, data processing, and other tasks. Optimize trade execution with natural language instructions.
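To make one of these use cases concrete, here is a minimal sketch of research report summarization using the same OpenAI chat interface as above. The prompt wording and placeholder report text are illustrative assumptions:

```python
import openai  # 0.x interface, as in the earlier sketch

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

report_text = "..."  # the analyst report to condense (placeholder)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are an assistant for investment professionals. "
                    "Summarize research reports in three bullet points, "
                    "flagging any figures that should be verified against the source."},
        {"role": "user", "content": report_text},
    ],
    temperature=0.2,  # low temperature favors faithful summaries over creative ones
)
print(response.choices[0].message.content)
```

Given the hallucination risk discussed above, any such summary should still be reviewed by a human before it reaches clients.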
What Are the Risks?
Is ChatGPT capable of artificial general intelligence (AGI)? Microsoft Research claimed that the latest OpenAI LLM shows “sparks” of AGI. But opinions vary as to whether ChatGPT or GPT-4 represents a significant step toward AGI. Of course, AGI definitions vary. That’s why we believe it’s too early to make a judgment based on limited and short-term trends.
To be sure, implementing governance, legal and compliance, and ethical systems around AI in a democratic manner will be critical. As Microsoft’s Satya Nadella put it, “Fundamentally, AI must evolve in alignment with social, cultural, and legal norms in a democratic society.”
Inequality could also pose a dilemma when it comes to data and computing power. The gulf between the haves and have-nots could lead to conflict and societal fractures if it grows too large.
For his part, Bill Gates is excited about ChatGPT and recent AI advancements. Indeed, he thinks AI can help reduce inequality by improving productivity in health care and education. But he also understands how it could exacerbate inequality if the benefits aren’t more evenly distributed. Ensuring that AI contributes to a more equitable society may require a combination of funding and policy interventions.
The Dawn of the GenAI Era
GenAI, like ChatGPT, can generate new data that resembles its training data. While ChatGPT specializes in NLP, other GenAI models can produce data related to images, three-dimensional objects, and sounds, if not yet to touch, taste, and smell.
Microsoft, Google, Adobe, and NVIDIA have all announced ambitious GenAI projects. Microsoft, which has a partnership with OpenAI, recently unveiled the Microsoft 365 Copilot, an AI-powered addition to the Microsoft Office suite. Google plans to integrate GenAI features into Google Workspace. Adobe has launched Adobe Firefly, and NVIDIA has introduced cloud services to help firms develop GenAI.
What’s Next?
The dawn of the GenAI era marks the beginning of a transformation in how investment industry professionals and other white-collar workers do their jobs. Those who leverage AI as their copilot will boost their productivity, while those who fail to embrace this revolution risk losing their competitive edge. As various fields integrate AI, the technology will redefine the workplace and lead to new standards of efficiency and effectiveness.
Sam Altman, the CEO of OpenAI, the creator of the ChatGPT chatbot, has tried to manage expectations: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” he said. He may be right in form if not substance. ChatGPT is just one incarnation of a rapidly evolving technology. But it is a harbinger of the transformation that is coming. We need to get ready.
For further reading on this topic, check out The Handbook of Artificial Intelligence and Big Data Applications in Investments, by Larry Cao, CFA, from CFA Institute Research Foundation.
All posts are the opinion of the author(s). As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.
Professional Learning for CFA Institute Members
CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.
Michinori Kanokogi, CFA
Michinori (Mitch) Kanokogi, CFA, is head of solutions research at Nissay Asset Management. He leads the research and development of digital investment solutions for investors. Previously, he led the launch of AQR Capital Management’s Japan business as head of investment management. He also has experience in multi-manager investment at Russell Investments, equity portfolio management at UBS, and management consulting at PwC and Deloitte. He translated several finance/AI books, including Advances in Financial Machine Learning, Expected Returns, and Beyond Diversification. He holds a bachelor’s degree in English from the University of Tokyo, a master’s degree in English from Kyoto University, and an MBA from INSEAD. He is a CFA charterholder.
Yoshimasa Satoh, CFA
Yoshimasa Satoh, CFA, is a director at Nasdaq. He also sits on the board of CFA Society Japan and is a regular member of CFA Society Sydney. He has been in charge of multi-asset portfolio management, trading, technology, and data science research and development throughout his career. Previously, he served as a portfolio manager of quantitative investment strategies at Goldman Sachs Asset Management and other companies. He started his career at Nomura Research Institute, where he led Nomura Securities’ equity trading technology team. He earned the CFA Institute Certificate in ESG Investing and holds bachelor’s and master’s degrees in engineering from the University of Tsukuba.