
AI in AI: the potential of artificial intelligence in alternative investments


Exhibit: the value-chain of intelligent automation and decision augmentation

Alternative investments are defined as investments outside of traditional asset classes such as fixed income and equities. As an asset class, alternative investments may include a wide range of themes such as venture capital, private debt and equity, hedge funds, specialty finance, insurance-linked strategies, commodities, and real estate, but also more exotic topics such as art, collectibles, or, most recently, cryptocurrencies. In our working definition, though, we narrow the scope of alternative investments to the active and unconstrained management of the capital structure (debt and equity) in private and public markets.


Intelligent automation and analytics

Active and unconstrained management implies that these asset managers do not follow a set tracking error against a defined index universe for performance benchmarking. They have an open mandate for executing their strategy, while the investment focus and scope are well defined. These managers work across the corporate and M&A lifecycle, investing in start-ups, growth companies, specialty and opportunistic lending, and activism, and/or using sophisticated risk management strategies (hedging) to protect their investments. Several alternative asset managers have successfully onboarded multiple managers with different strategies onto a cross-collaboration platform.


Alternative investment is all about sound decision making based on available information, good judgment, and careful monitoring and risk management of portfolio companies and trading positions. Available data is at the core of a successful investment process. The investment process, from screening the universe to selecting, monitoring, and managing the investments, may further be supported by mathematical algorithms. Recent advances in computational power, big-data techniques, machine learning (ML), and, most recently, breakthroughs in generative artificial intelligence (AI) augment decision making as well as enhance and automate processes across the investment lifecycle. AI can initially replace repetitive manual tasks, but it also has the potential to automate complex processes and workflows.


The data and machine-learning value chain

The investment process and lifecycle are organised across different functional segments, spanning investment research and prospecting, deal origination, pipeline management, investment decisions, portfolio monitoring, and risk management. The functional breakdown also covers fundraising and investor relations. The impact and productivity in this lifecycle must be optimised across a group of value drivers and performance metrics that can be measured by key performance indicators. This optimisation can ultimately be aggregated into two classes of capabilities: intelligent automation and decision augmentation. The exhibit above breaks the value chain down into data ingestion (availability), contextualising (structuring), analysis (insight generation), and capability building across different use cases.


Data ingestion

Data availability is key for success in investment and risk management across the capital structure. Several technologies are available to extract and organise data. Structured data is highly organised and can be codified, sorted, and searched in files. Most financial data, such as accounting statements or transaction records, is structured. Although structured data can be analysed with traditional computing techniques, the sheer amount of data today presents an operational challenge to almost any organisation. Unstructured data, on the other hand, has no predefined model or organisational form. It can be found in emails, reports, news, social media, or any other form of written and oral communication. The individual data points have no clear and defined relationship with each other, which makes them difficult to capture and organise. Private financial and unstructured data are often not accurately and comprehensively captured – a key focus area for which the technologies introduced below can make a substantial difference. This process is also often referred to as data operations.


Optical character recognition (OCR) is a technology that extracts structured and unstructured data from PDFs such as an annual financial report or a monthly investment report. It scans the PDF file and analyses the pixels to identify characters and words, and it can handle any type of PDF document. However, OCR software is not always accurate and may even introduce errors; it can also be resource-intensive and time-consuming. Parsing, on the other hand, extracts data from structured or semi-structured PDFs by analysing their internal structure and metadata. The parsing software reads the PDF file and identifies its elements and attributes. A main advantage of parsing is that it can extract data with high accuracy and reliability, but the PDF document must have a consistent and well-defined structure.
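
A minimal sketch of the two approaches, assuming the open-source pdfplumber (parsing) and pytesseract with pdf2image (OCR) packages; the file name is a hypothetical placeholder, not a specific report:

```python
# Sketch: parse a well-formed PDF first, fall back to OCR for scanned documents.
# Assumes pdfplumber, pdf2image, and pytesseract are installed;
# "monthly_report.pdf" is a hypothetical file name.
import pdfplumber
from pdf2image import convert_from_path
import pytesseract

PDF_PATH = "monthly_report.pdf"  # hypothetical investment report

def extract_by_parsing(path: str) -> str:
    """Read the PDF's internal structure; fast and reliable for well-formed PDFs."""
    with pdfplumber.open(path) as pdf:
        return "\n".join(page.extract_text() or "" for page in pdf.pages)

def extract_by_ocr(path: str) -> str:
    """Render each page to an image and recognise characters; works on scans,
    but is slower and may introduce recognition errors."""
    images = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)

if __name__ == "__main__":
    text = extract_by_parsing(PDF_PATH)
    if not text.strip():          # no embedded text layer: likely a scanned PDF
        text = extract_by_ocr(PDF_PATH)
    print(text[:500])
```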


Data crawling is a method that involves data mining from different sources. It applies specialised technologies to collect data from multiple sources, such as webpages, but also internal databases, legacy systems, and other repositories. Its objective is to build a comprehensive database that automates the data collection and can be used for analysis and decision making. A data lake is a repository in which data is stored in its available raw format. A data warehouse is a system that stores highly structured information from various sources. It aims to combine disparate data sources so that the data can be analysed for insights across the data sets.
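
A minimal crawling sketch, assuming the requests and beautifulsoup4 packages; the seed URL and the in-memory "raw store" (a stand-in for a data lake) are hypothetical illustrations:

```python
# Sketch: collect page text and follow links from a seed URL into a raw store.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url: str, max_pages: int = 10) -> dict[str, str]:
    raw_store: dict[str, str] = {}            # url -> raw page text (data-lake stand-in)
    queue, seen = deque([seed_url]), {seed_url}
    while queue and len(raw_store) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(response.text, "html.parser")
        raw_store[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):   # queue further pages on the same site
            absolute = urljoin(url, link["href"])
            if absolute.startswith(seed_url) and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return raw_store

pages = crawl("https://example.com/news")    # hypothetical source
print(f"Collected {len(pages)} pages")
```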


Contextualising

Big data techniques are technologies to manage all types of datasets and transform them into business intelligence. Many techniques are available. At their core are ML algorithms that can be used to automatically identify patterns and trends in large datasets. Algorithms are mathematical procedures that interpret data in specific ways and create predictable output based on data patterns. Many of these algorithms have contributed substantially to the advancement of AI. There are many and varied definitions of AI, and the term is often used interchangeably with ML. However, AI should be applied broadly to the replication of human decision making and analytical capabilities, which assumes a high level of human-like intelligence. ML, in this sense, is a specific application or key field of study within AI that uses models and algorithms for the analysis, manipulation, pattern recognition, and prediction of data.
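
A minimal sketch of ML-driven pattern recognition, grouping companies by simple financial metrics with k-means clustering; the figures and feature names are hypothetical, and scikit-learn and numpy are assumed to be installed:

```python
# Sketch: let an ML algorithm surface groups of similar companies automatically.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features per company: [revenue growth, EBITDA margin, leverage]
X = np.array([
    [0.35, 0.10, 2.5],
    [0.05, 0.25, 1.0],
    [0.40, 0.05, 3.0],
    [0.02, 0.30, 0.8],
    [0.30, 0.12, 2.8],
    [0.04, 0.28, 1.2],
])

X_scaled = StandardScaler().fit_transform(X)          # put features on a common scale
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

print("Cluster labels per company:", model.labels_)
# Companies with similar growth/margin/leverage profiles land in the same cluster,
# a simple example of patterns identified automatically from the data.
```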


Natural language processing (NLP) is a branch of ML that facilitates interactions between computers and human natural languages. The technology makes it possible to identify, organise, and analyse large amounts of unstructured data from written or oral language samples. It applies the principles of linguistic structure, breaking sentences down into their elemental pieces to identify the semantic relationships among them. In essence, it captures the raw data and processes it into decision analytics, often in the form of quantitative metrics such as scores. The algorithms detect the polarity of sentences within a range of positive, neutral, and negative towards a respective target entity.
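
A minimal NLP sketch that breaks a sentence into its elemental pieces and surfaces the grammatical relationships between them, assuming spaCy and its small English model (en_core_web_sm) are installed; the sentence is a hypothetical example:

```python
# Sketch: decompose a sentence into tokens, dependencies, and named entities.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The portfolio company missed its revenue guidance for the third quarter.")

for token in doc:
    # token text, part of speech, dependency relation, and the word it depends on
    print(f"{token.text:<10} {token.pos_:<6} {token.dep_:<10} -> {token.head.text}")

# Named entities give a first layer of structure for downstream scoring.
for ent in doc.ents:
    print(ent.text, ent.label_)
```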


Analytics

ML generates metrics to drive decision making and subsequent actions. The objective of these analytics is to discover useful patterns between different items of data. Descriptive analytics uncover historical patterns to categorise the data. Predictive analytics forecast likely outcomes from those patterns, while prescriptive analytics indicate what actions should be taken based on the forecasts. Advanced analytics is the overarching term for the computational methods that feed historical data patterns into models to generate fresh predictions.
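
A minimal predictive-analytics sketch, fitting a linear model to historical quarterly revenue and forecasting the next quarter; the figures are hypothetical, and scikit-learn and numpy are assumed:

```python
# Sketch: turn a descriptive series of past quarters into a simple forecast.
import numpy as np
from sklearn.linear_model import LinearRegression

quarters = np.arange(1, 9).reshape(-1, 1)                              # Q1 .. Q8
revenue = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8, 13.5, 14.1])   # hypothetical, in EUR m

model = LinearRegression().fit(quarters, revenue)   # learn the historical trend
forecast = model.predict(np.array([[9]]))           # predict the next quarter

print(f"Forecast Q9 revenue: {forecast[0]:.1f}m")
```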


Sentiment scores take large amounts of language and evaluate its positive or negative connotation through a pre-defined event methodology. The event classification is shaped by keywords that imply a specific attitude associated with statements made in the respective language. News and blogs can be analysed accordingly to establish a view on the sentiment of the statements in the text. This may be combined with Bayesian probability theory to address the uncertainty of specific circumstances through an underlying hypothesis, whose probability is updated as more data becomes available during the process.
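
A minimal sketch of the Bayesian updating step, assuming a simple Beta prior over the hypothesis "news flow on this company is positive" that is refined as each classified statement arrives; the counts and labels are hypothetical:

```python
# Sketch: update the probability of a sentiment hypothesis as new evidence arrives.
from dataclasses import dataclass

@dataclass
class SentimentBelief:
    alpha: float = 1.0   # prior pseudo-count of positive statements
    beta: float = 1.0    # prior pseudo-count of negative statements

    def update(self, positive: bool) -> None:
        """Incorporate one newly classified statement."""
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def probability_positive(self) -> float:
        """Posterior mean probability that the next statement is positive."""
        return self.alpha / (self.alpha + self.beta)

belief = SentimentBelief()
for label in [True, True, False, True, True]:   # hypothetical classified news items
    belief.update(label)
    print(f"P(positive) = {belief.probability_positive:.2f}")
```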


Large language models (LLMs) develop the ability to understand and generate general-purpose language. They are often referred to as generative AI for their ability to create new output. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training, a process that consumes large computational resources. Over the last couple of years, LLMs have achieved substantial breakthroughs with larger models such as GPT-3/4 from OpenAI, LLaMA from Meta, or PaLM from Google. These models acquire embodied knowledge about the syntax, semantics, and purpose inherent in natural language and put it in the broader context of human interactions.
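
A minimal generative-AI sketch, summarising a passage from an investment memo with a pre-trained language model via the Hugging Face transformers pipeline; the memo text is hypothetical, and the package downloads a default summarisation model on first run:

```python
# Sketch: condense unstructured memo text into a short summary with an LLM pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")

memo = (
    "The target operates a specialty lending platform with 40% year-on-year "
    "origination growth. Credit losses remain below 1%, but funding costs have "
    "risen and covenant headroom on the warehouse facility has narrowed. "
    "Management proposes a EUR 50m mezzanine tranche to extend the runway."
)  # hypothetical memo excerpt

summary = summarizer(memo, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```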


Complexity science provides the methodology to analyse and simulate emergent phenomena on a systematic basis. It uses a multidisciplinary approach ranging from mathematics to economics, political science, and the social sciences. Its goal is to discover dynamics, conditions, and constraints that can be generalised into overarching principles for the modelling and analysis of emerging events in an integrated, systemic approach. Complexity science, in combination with AI, big data, and ML, provides new capacities for modelling, simulation, and decision making.
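
A minimal agent-based sketch of emergent behaviour, in which traders following simple local imitation rules produce herding at the aggregate level; all parameters are hypothetical illustrations, not a calibrated market model:

```python
# Sketch: herding emerges from simple local rules in an agent-based simulation.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, imitation = 200, 50, 0.6   # hypothetical parameters

# Each agent holds a position of +1 (long) or -1 (short), initially at random.
positions = rng.choice([-1, 1], size=n_agents)
net_sentiment = []

for _ in range(n_steps):
    crowd = positions.mean()                      # aggregate market mood
    for i in range(n_agents):
        if rng.random() < imitation:
            positions[i] = 1 if crowd >= 0 else -1    # imitate the prevailing direction
        else:
            positions[i] = rng.choice([-1, 1])        # act independently
    net_sentiment.append(positions.mean())

print("Final net positioning:", round(net_sentiment[-1], 2))
# Even with purely local rules, the population tends to lock into one direction,
# an emergent effect of the kind complexity science seeks to model and analyse.
```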


Capability-building across use cases

The different building blocks across the data and ML value chain make it possible to develop specific capabilities for strategic insights, forecasting, and risk management across the investment process and lifecycle. In research, prospecting, deal origination, and pipeline management, huge amounts of information and data need to be collected, contextualised, analysed, and documented. The technologies introduced above provide the toolkit not only to automate this work but also to enhance the data set in preparation for investment decisions and their risk management. ML algorithms can provide recommendations, challenge certain assumptions of the investment hypothesis, and execute specific queries swiftly. For investment monitoring, portfolio management, and risk management, ML algorithms provide specific intelligence and analytics as early warning indicators. With the incoming sustainability requirements for environmental, social, and corporate governance (ESG), ML algorithms can also match criteria and evaluate suitability within the guidance of the investment mandate. A Software as a Service (SaaS) platform offers an integrated interface with a set of functionalities and services to cover this lifecycle and its capabilities comprehensively.
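
A minimal early-warning sketch for portfolio monitoring, flagging portfolio companies whose monthly KPIs look anomalous relative to the rest of the book; the data and threshold are hypothetical, and scikit-learn and numpy are assumed:

```python
# Sketch: anomaly detection as an early warning indicator across portfolio KPIs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical monthly KPIs per company: [revenue growth, cash burn, DSO days]
kpis = np.array([
    [0.05, 0.8, 45],
    [0.04, 0.9, 50],
    [0.06, 0.7, 42],
    [0.05, 0.8, 48],
    [-0.20, 2.5, 95],   # deteriorating company
    [0.05, 0.9, 46],
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(kpis)
flags = detector.predict(kpis)            # -1 marks an outlier

for idx, flag in enumerate(flags):
    if flag == -1:
        print(f"Early warning: company {idx} shows anomalous KPIs")
```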


Notes:

1) For literature references and an overview, see the section "Resources and Further Reading" at the end of chapter four of "Transforming Financial Institutions"

2) Expression of gratitude and appreciation to the different team members at ACERTAS, impactvise, ChartIQ, quantmate, Nectar, Axeleris, North Base Media and many others with whom we worked on those topics
