
Augmented decision making

This blog focuses on augmented decision making, the second pillar of the financial industry’s “Transformation Agenda”. Advancements in decision making have been driven by artificial intelligence (AI) and big data. Natural language processing (NLP) and Bayesian machine learning (ML) allow traditional financial data to be combined with alternative data, which leads to more accurate predictions for trading, investment management and credit underwriting. Advanced analytics aggregate information and simplify decision making. Intelligent forms of data visualisation provide insight and clarity during the process.


The second pillar of the transformation agenda

Risk taking and related decision making are at the core of what financial institutions do. Strategic and financial decision-making theory has established a number of qualitative and quantitative methods that provide financial institutions with a comprehensive tool set. These methods can be descriptive but become powerful when they add a predictive layer. Predictive methods have been shaped by systems theory, game theory and macro history. All these methods aim to identify patterns, establish principles and validate decisions in a consistent framework.

Systems theory is a multi- and interdisciplinary approach that integrates different theories with their individual premises and dynamic behaviours. Its goal is to discover dynamics, conditions and constraints that can be generalised to overarching principles across systems. Game theory allows the simulation of strategic interactions among rational decision makers. Macro history compares past events, extrapolating common factors from proximate details to determine root causes and patterns. All these methods have in common that they need to process large amounts of data from different sources of information.

Advances in AI and data science, combined with higher computational power, have fundamentally augmented processing strengths and decision outcomes. In financial institutions, data-driven forms of AI have found applications across the core functions of research and analysis, credit underwriting and lending, securities trading and marketing, investment advisory, and risk and capital management. They have been broadly applied to assess and manage different forms of risk, price financial instruments, and execute trading and investment strategies.


Technology innovation

AI seeks to build autonomous machines that can carry out complete tasks without human intervention. There are many and varied definitions of AI, and the term is often used interchangeably with ML. We apply the term AI broadly to the replication of human decision making and analytical capabilities; it assumes high levels of human-like intelligence. ML is a key field of study within AI that uses algorithms for the analysis, manipulation, pattern recognition and prediction of data. Algorithms are the mathematical procedures that ML applies to interpret data in specific ways and create predictive output based on identified patterns.


Machine learning

ML is a specific application of AI that is underpinned by analytics. Descriptive analytics discover historical patterns to categorise and summarise data. Predictive analytics anticipate future outcomes and indicate what actions should be taken on that basis. Predictive models are trained on large samples of historical, pre-processed data; new cases are then fed into the model to generate fresh predictions. Historical data are combined with algorithms to determine the probable future outcome of an event. Such models have made substantial progress through advanced algorithm designs and available computational power. Artificial neural networks (ANN) represent the most sophisticated ML models. They were inspired by communication nodes and information processing in biological systems (e.g. the structure of the brain). Advanced forms of ANN with multiple layers are classified as deep learning and today solve the most complex problems, such as big data analysis and NLP.
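As a minimal sketch of such a predictive model, the following Python example trains a small feed-forward neural network on synthetic historical data and then feeds a new case into it to generate a fresh prediction. The features, labels and parameters are hypothetical and purely illustrative, not a production model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic "historical" features: leverage, interest coverage, revenue growth
X_hist = rng.normal(size=(1000, 3))
# Hypothetical rule generating past default labels (1 = default)
y_hist = (X_hist[:, 0] - X_hist[:, 1]
          + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

# Pre-process the data, then fit a small feed-forward ANN (one hidden layer)
scaler = StandardScaler().fit(X_hist)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_hist), y_hist)

# A new case is fed into the trained model to generate a fresh prediction
x_new = np.array([[1.8, -0.5, 0.1]])
print("estimated default probability:",
      model.predict_proba(scaler.transform(x_new))[0, 1])
```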


Big data

Big data is a collective name that refers to the use of immense datasets for ML applications. It covers the relevance, complexity and depth of a financial institution's data, which come from internal and external sources. The core components that drive analytics start with the available data pool. The scale of available data has been increasing exponentially over the last 20 years, and the resulting information overload overwhelms not only individuals but entire organisations. ML allows large datasets to be processed with mathematical accuracy and objectivity, which leads to unbiased results, substantial efficiency gains and new insights. Internal datasets are often proprietary and non-public, whereas external data are usually publicly available. To create a computer set-up that can learn, large datasets are required. Pattern recognition in a non-deterministic manner can happen only if the algorithm has access to a sample subset from which to make predictions about the dataset as a whole.

Structured data are highly organised and can be codified, sorted and searched in files. Most financial data, such as accounting statements or transaction records, are structured. Unstructured data, on the other hand, have no predefined organisational form. They can be found in emails, reports, news, social media, or any other form of written and oral communication. Unstructured data can be captured and processed by NLP.


Natural language processing

NLP is a branch of ML that facilitates the interaction between computers and natural human language. This technology allows the identification, organisation and analysis of large amounts of unstructured data from written or oral language samples. It applies the principles of linguistic structure and breaks down sentences into their elemental pieces to identify the semantic relationships among them. In essence, it captures the raw data elements and processes them into decision analytics, often in the form of quantitative metrics such as scores.

NLP performs a target- and sentiment-recognition analysis that depends on the specific application. The algorithms detect the polarity of sentences, within a range of positive, neutral and negative, with respect to a target entity. The entity recognition may target company names, but also influencers of specific topics regarding the company's situation and developments, such as C-level executives and equity analysts. Sentiment data are an output of this process: large amounts of language are evaluated for their positive or negative connotation through a predefined event methodology. The event classification is shaped by keywords that imply a specific attitude associated with statements made in the respective language. News and social-media posts can be analysed accordingly to establish a view on the sentiment of the statements in the text.
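To illustrate the mechanics, here is a minimal, hypothetical Python sketch of lexicon-based target and sentiment recognition. Production NLP pipelines rely on full linguistic parsing and trained models rather than simple keyword matching; the word lists and headlines below are invented.

```python
# Hypothetical polarity lexicons shaping the event classification
POSITIVE = {"upgrade", "beat", "growth", "strong", "improves"}
NEGATIVE = {"downgrade", "miss", "fraud", "weak", "lawsuit"}

def sentiment_score(text: str, target: str) -> float:
    """Return polarity in [-1, 1] for sentences mentioning the target entity."""
    scores = []
    for sentence in text.lower().split("."):
        if target.lower() not in sentence:
            continue  # only score sentences about the target entity
        words = sentence.split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos + neg:
            scores.append((pos - neg) / (pos + neg))
    return sum(scores) / len(scores) if scores else 0.0  # 0.0 = neutral

news = "Acme reports strong growth and beat estimates. Rival faces lawsuit."
print(sentiment_score(news, "Acme"))  # 1.0: only positive sentences mention Acme
```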


Advanced analytics

NLP facilitates the interpretation of unstructured data and, through sentiment scores, establishes an alternative dataset. Alternative data are classified as data sources through which specific insights are gained during the decision-making process. Such insights are often referred to as intelligence, e.g. in the context of financial decision making as market (macro) and company (micro) intelligence. The combination of alternative data with probabilistic optimisation leads to the development of advanced forms of decision algorithms.


Bayesian probability theory allows uncertainty to be modelled in a sensible way. It quantifies what is unknown and enables appropriate decisions to be made under the circumstances assessed by the model through an underlying hypothesis. The probability of this hypothesis can be updated as more data become available during the process. This is called Bayesian inference, and it allows the dynamic analysis of sequences of the same dataset. In the context of financial decision making, the Bayesian framework allows the collective behaviour of a stochastic variable, such as an asset, to be described in a unified manner through underlying probability distributions. This enables the simultaneous assessment of risk and uncertainty. Unstructured data can be collected and processed into decision metrics before being combined with traditional structured data. The implementation of an overarching ML algorithm then optimises the decision analytics.
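A minimal sketch of Bayesian inference, assuming a hypothetical default-probability hypothesis with a conjugate Beta prior that is updated as each new observation arrives:

```python
# Beta(a, b) prior on a default probability; posterior mean is a / (a + b).
# Prior parameters and the observation stream are hypothetical.
a, b = 2.0, 8.0                  # prior belief: roughly 20% default probability
observations = [0, 0, 1, 0, 0]   # 1 = default event observed, 0 = no default

for obs in observations:
    # Conjugate update: posterior is Beta(a + obs, b + (1 - obs))
    a += obs
    b += 1 - obs
    print(f"posterior mean after observation {obs}: {a / (a + b):.3f}")
```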


A credit application or an equity investment can, for example, be evaluated with regard to statements made in news and social media by influencers concerning its environmental, social and governance (ESG) criteria, and respective sentiment scores can be calculated. Each ESG criterion then defines an event for the NLP mechanism to identify and analyse in the available information feeds from numerous sources. This set of unstructured data may further be combined with traditional financial analysis and evaluated by Bayesian inference. The Bayesian method allows the analyst to adjust the originally assumed beliefs (expressed in probabilistic terms, i.e. their likelihood given the currently available information) about the sustainability profile of the applicant (for credit) or investment target (for equity). The analysis assesses the impact on the respective expected loss and return expectations, and integrates the insights from the unstructured data into a comprehensive decision framework.
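As a worked sketch of this idea, the following example applies Bayes' rule to revise a prior belief about an applicant's sustainability profile after observing a positive ESG sentiment score, and then feeds the revised probability into an expected-loss figure. All probabilities, and the link from the ESG profile to the probability of default, are hypothetical.

```python
p_sustainable = 0.60     # prior: applicant has a sound ESG profile
p_pos_given_sus = 0.80   # P(positive sentiment | sustainable)
p_pos_given_not = 0.30   # P(positive sentiment | not sustainable)

# Bayes' rule after observing a positive ESG sentiment score
evidence = (p_pos_given_sus * p_sustainable
            + p_pos_given_not * (1 - p_sustainable))
posterior = p_pos_given_sus * p_sustainable / evidence
print(f"posterior P(sustainable) = {posterior:.3f}")   # 0.800

# Hypothetical link to credit risk: a weaker ESG profile implies a higher PD
pd_ = 0.02 + 0.03 * (1 - posterior)   # probability of default
lgd, ead = 0.45, 1_000_000            # loss given default, exposure at default
print(f"expected loss = {pd_ * lgd * ead:,.0f}")
```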


Data visualisation

The decision-making process and its workflow can further be enhanced through data visualisation. Data visualisation is a powerful tool that adds another cognitive dimension to decision making. It focuses on the graphical representation of the data, which may lead to new insights. Flexible charting applications pick up different datasets and establish a reference point for the statistical interpretation of an investment situation. They encode the data as visual objects, expressed by lines, bars and ranges and mapped to specific attributes. This form of cognitive interpretation makes data visualisation an effective tool for decision making.
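A minimal Python sketch of this encoding, mapping synthetic price data to a line, a rolling high-low range to a band, and volume to bars (all data are invented):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
days = np.arange(250)
price = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 250)))  # random-walk prices
volume = rng.integers(1_000, 5_000, 250)

window = 20  # rolling high-low range over the last 20 observations
hi = np.array([price[max(0, i - window):i + 1].max() for i in days])
lo = np.array([price[max(0, i - window):i + 1].min() for i in days])

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True,
                               gridspec_kw={"height_ratios": [3, 1]})
ax1.plot(days, price, label="price")                             # line encodes level
ax1.fill_between(days, lo, hi, alpha=0.3, label="20-day range")  # band encodes range
ax1.legend()
ax2.bar(days, volume)                                            # bars encode volume
ax2.set_xlabel("trading day")
plt.show()
```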


High-performance analysis looks at time series and defines comparable statistics for the visual analysis of different data points, such as asset price bands, discretely or continuously calculated returns, correlation, and higher statistical moments such as volatility, skew and kurtosis. It can be applied in a pre-trade analytics setting to execute a transaction (e.g. technical analysis), or in a post-trade setting to analyse the different cost layers of a transaction or to comply with regulatory requirements such as Dodd-Frank in the US and EMIR and/or MiFID II in Europe. Charting can be extended to multi-dimensional visualisations such as term-structure analysis for interest rates and surface analysis for the implied volatility of financial options. As a discipline, data visualisation is considered a branch of descriptive statistics. The extrapolation of time series (an important use case for technical analysis) and the incorporation of alternative datasets allow the introduction of predictive dimensions to the analytical framework. This predictive extension requires advanced design, statistical and computational skills.
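To make the statistics concrete, here is a minimal sketch computing the measures named above on synthetic prices (the data and parameters are illustrative):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
price = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.012, 750)))

simple_ret = price[1:] / price[:-1] - 1        # discretely calculated returns
log_ret = np.diff(np.log(price))               # continuously compounded returns

ann_vol = log_ret.std(ddof=1) * np.sqrt(252)   # annualised volatility (2nd moment)
print(f"annualised volatility: {ann_vol:.2%}")
print(f"skew: {skew(log_ret):.3f}, excess kurtosis: {kurtosis(log_ret):.3f}")

# Correlation of returns with a second synthetic asset
other = 100 * np.exp(np.cumsum(rng.normal(0.0001, 0.015, 750)))
corr = np.corrcoef(log_ret, np.diff(np.log(other)))[0, 1]
print(f"return correlation: {corr:.3f}")
```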


Lightweight technologies such as HTML5 templates with a JavaScript (JS) engine allow the flexible integration of charting applications into the decision-making workflow. By using a single library of code, the ubiquitous JS runtime powers charting applications that can easily be customised to different user-experience requirements and installed across different platforms (user interfaces) such as desktop, browser and mobile. JS allows for rapid development and deployment, and brings the defined use case and decision-making paradigms closer to users' familiar daily web experience.


Conclusion

Financial decision making is fundamentally driven by the objective to assess risk by predicting macro- and microeconomic events. With systems theory, game theory and macro history, strategic and economic theory have established a number of methods that today shape a comprehensive analytical framework. The combination of NLP with Bayesian ML to predict, and the application of lightweight technologies to visualise data, have advanced the accuracy and objectivity with which this framework is executed. Computational power provides the processing capacity that remains key given the exponential growth of data volume and available information over the last 20 years.


The book “Transforming Financial Institutions” aims to contribute to the ongoing discussion about the financial industry’s transformation agenda and its three pillars to facilitate commercial growth and operational change. The two blog series, in combination with the book, aim to provide decision makers with a comprehensive framework to drive their own institutions’ transformation agenda. Further information on the book and the related thought-leadership series can be found at www.eesadvisory.com/our-insights. The next blog in this series will focus on digital financial innovation, the third pillar of the transformation agenda.


