AI in forensic sciences
There is no doubt that AI has a role in forensic sciences, which implies the need to increase our understanding of its impact. AI and emerging technologies are, for example, part of the European Forensic Science Area 2030 vision, although 2030 seems rather far away considering the current pace of digitalisation. Some thoughts and insights are presented in this section; for a more in-depth examination, see for example Geradts and Franke, who present state-of-the-art applications.
There are many examples that show the rapid development of technology. One is the development of airplanes, from the early wooden ones more than 100 years ago to today's jets. In the same way, forensic science has undergone development, and new analysis methods continue to pave the way for making the most of physical and digital traces from crimes.
There are several types of AI methods, with more to come. However, AI-based methods still do not match the capability of our human brains. The human brain can be modelled as two systems, Kahneman: a fast system that operates non-consciously, and a second, slower conscious system that allocates attention when demanded. It is reasonable to view today's AI systems as essentially mimicking only the non-conscious part of the human brain.
A simple example relates to driving a car. AI methods used for autonomous driving are "non-conscious": they cannot handle unexpected events that they have not been trained for. We, on the other hand, can invoke our conscious system and immediately process the new situation and act. However, if we are tired or unfocused, we may find ourselves driving to our old workplace instead of our new job, using only the non-conscious brain function. Thus, we need to be aware of the limitations of AI systems. AI systems beat us humans in other areas, such as handling and remembering extremely large amounts of heterogeneous information beyond the capability of our brains. Nowadays, AI methods beat us in some forensic areas, such as language and image recognition.
This implies a need to revisit human-machine cooperation. We must not underestimate the use of AI, e.g. to cope with the enormous amounts of data. An example is the challenge of handling all possible combinations of connecting the world's 40 billion IP addresses when searching for criminal acts on the internet. Nor should we overestimate AI, as there will always be a need for forensic scientists in the wake of digital transformation, e.g. as those responsible for reports and findings and for the use of AI-based tools. Optimising human-based and computer-based methods for forensic science involves leveraging the strengths of both approaches to enhance the efficiency and accuracy of investigations.
We analyse traces and convert them into vectors of digital information, feeding such strings derived from e.g. fingerprints and DNA into various classification algorithms to search for matches in databases that can identify a particular person or object. In predictive AI, e.g. supervised learning, we add some logic and train the system to recognise what we are looking for, using labelled datasets to train algorithms to accurately classify data or predict outcomes.
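The matching step above can be sketched in a few lines. This is a minimal illustration, not an operational forensic tool: all identities, feature vectors and the distance threshold are hypothetical, and real systems use far richer features and statistically calibrated match criteria.

```python
# Sketch: a trace is reduced to a numeric feature vector and compared
# against a reference database by nearest-neighbour distance.
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical database: identity -> feature vector derived from a trace
database = {
    "person_A": [0.12, 0.80, 0.33],
    "person_B": [0.90, 0.10, 0.45],
    "person_C": [0.15, 0.78, 0.30],
}

def closest_match(query, db, threshold=0.2):
    """Return the best-matching identity, or None if nothing is close enough."""
    best_id, best_dist = None, float("inf")
    for identity, vector in db.items():
        d = euclidean(query, vector)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

trace = [0.14, 0.79, 0.31]             # feature vector from a new trace
print(closest_match(trace, database))  # -> person_C
```

A supervised-learning system replaces the fixed distance rule with a classifier trained on labelled examples, but the overall shape of the pipeline is the same.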
In generative AI, we reverse the process and instead feed results into the system, which allows us to generate new content based on what the system has been fed. Generative AI is widely used e.g. for creating images and video from text, in large language models, and to generate code, and it is likely that we will be flooded with generative AI content in crimes as well. ChatGPT and similar systems are trained to follow prompted instructions and provide a detailed response. But we need to be aware that generative AI generates information based on what the system has been fed.
If a system is sparsely trained on a specific subject, we cannot fully trust the response to be either true or accurate. We now also face the development of specialised "ChatGPTs" for specific applications. So do criminals: one example is WormGPT, designed to assist criminals with their hacking and programming endeavours and to enable malicious activities. The models behind ChatGPT and similar systems are widely reused, as the cost of training such models from scratch is still too high.
A question raised is software reliability: how do we, for example, get a grip on the error rate in AI-generated code? The next expected development is a move from AI apps to AI assistants, combining the features of generative AI in new ways. One example is AI-enabled conversations, combining several AI methods, such as merging digital twins with voice-controlled large language models and visualisation techniques. Consequently, a current trend is multimodal AI, combining 3D models, large or domain-specific language models, biometric methods or similar models.
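One simple, commonly used way to probe the error-rate question is to run each generated candidate against a reference test suite and report the empirical failure rate. The sketch below assumes this approach; the candidate functions are hypothetical stand-ins for AI-generated implementations of absolute value.

```python
# Sketch: estimate an empirical error rate for generated code by
# checking each candidate against known input/output cases.

def passes_reference_tests(fn):
    """Return True if fn matches the expected behaviour of abs()."""
    cases = [(-3, 3), (0, 0), (7, 7)]
    try:
        return all(fn(x) == expected for x, expected in cases)
    except Exception:
        return False

# Hypothetical AI-generated candidates
candidates = [
    lambda x: x if x >= 0 else -x,   # correct
    lambda x: -x,                    # wrong for positive input
    lambda x: x,                     # wrong for negative input
]

failures = sum(not passes_reference_tests(fn) for fn in candidates)
error_rate = failures / len(candidates)
print(f"empirical error rate: {error_rate:.2f}")  # -> 0.67
```

Such pass-rate measurements only cover the behaviours the test suite encodes, which is precisely why the reliability question remains open for forensic use.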
The use of AI brings ethical concerns that need our attention. To use AI-based systems, we need to trust that the systems are secure, provide authentic results, and have not been hacked in any of their components. Bias in all forms is important to consider, and thus we need to understand and account for bias in AI systems just as we do for conventional methods. Bias in training data is one example, as our human biases can be replicated by AI systems. The databases used for training mostly represent the world as it is, not the way we ideally want our society to be, e.g. concerning gender equality, explainability and fairness. This implies the need for training and education, and for regularly assessing and monitoring AI systems for biases, so that we can trust AI systems in the same way as more established tools and instruments. Moreover, we need to address when computer-based methods should be validated on the basis of the demographic properties of the populations relevant in the cases considered, rather than on the basis of a representative distribution of these properties in society at large.
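Monitoring a system for bias can start with something as basic as comparing error rates across demographic subgroups. The sketch below assumes we have predictions and ground truth labelled by subgroup; all records and group names are hypothetical, and a real assessment would use proper statistical tests on much larger samples.

```python
# Sketch: compare false-positive rates (wrongly declared matches)
# between two hypothetical subgroups.
records = [
    # (subgroup, predicted_match, true_match)
    ("group_1", True,  False),
    ("group_1", False, False),
    ("group_1", True,  True),
    ("group_2", True,  False),
    ("group_2", True,  False),
    ("group_2", False, False),
]

def false_positive_rate(rows):
    """Share of true non-matches that the system wrongly flagged as matches."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return 0.0
    return sum(r[1] for r in negatives) / len(negatives)

for group in ("group_1", "group_2"):
    rows = [r for r in records if r[0] == group]
    print(group, false_positive_rate(rows))
```

A marked gap between the subgroup rates would be one signal that the method should be revalidated against the demographics actually relevant to the cases at hand.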
4. The Swedish collaborative network on digital forensics
In 2018, the cross-organisational network DFS was formed for the exchange of digital forensic challenges and needs between academic, governmental and industrial stakeholders, with the mission to lay the ground for strong national Swedish competence and capacity within digital forensics. The formation of DFS sprang from the needs and challenges of the National Forensic Centre (NFC), Swedish Police, in keeping up with expertise and resources amid rapid digitalisation, realising that no organisation on its own has the capability to keep up with the broad and evolving field of digital forensics. Consequently, the three authors of this paper, together with a forensic expert at NFC, invoked their broad networks, with a positive response. DFS has since been a network that gathers people and organisations who work for the good of society, bringing together Swedish expertise on a voluntary basis to exploit and evolve digital forensics.
The collaborative efforts within the DFS network now showcase Swedish exchange of information and expertise among about sixty partners from government agencies, universities, research institutions, law enforcement, non-government organisations, industry, communities of interest and innovation hubs, working together to bolster digital forensic capabilities. DFS also provides a forum to join forces and support commercialisation of digital forensics tools and methods in the wake of increasing digital threats, although the partners' needs vary. The long-term goal is to form a national research centre that provides excellence and acts in a rapidly changing global context to meet the strain on our society brought by evolving crimes.
A survey was conducted during autumn 2021 to understand the DFS partners' needs and to prioritise actions . Twenty-six partners were interviewed, Table 1, all being highly engaged and open-minded. The questionnaire focused on five topics:
1. the partner's daily operations, research and business needs, used for our understanding of the partners' roles,
2. what the partner needs from DFS and what it can contribute to DFS,
3. what can be provided to DFS, such as resources, project management, keynote speeches etc., used for our understanding of the partners' contributions to DFS,
4. observed trends in digital forensics, and
5. what needs to be communicated to the political arena.