About the future of intelligent threat detection

Marlon Possard

An interview with Marlon Possard – expert in artificial intelligence

Marlon Possard is an Assistant Professor, legal and administrative scholar, and philosopher. He teaches and conducts research as a habilitation candidate at the Department of Administration, Economics, Security and Politics (VWSP) and at the Research Center Administrative Sciences (RCAS) at the University of Applied Sciences Campus Vienna (HCW). He also teaches and conducts research at the Institute for Digital Transformation and Artificial Intelligence at the Faculty of Law of Sigmund Freud Private University in Vienna and Berlin (SFU), where he heads the Department of Ethics of Artificial Intelligence. In addition, he is a visiting researcher at Harvard University (USA). He is the author of more than 140 publications and contributions on questions of law, administration and ethics.

Ahead of the Campus Lecture in Vienna, he took the time to answer our questions. 

Question:

Mr Possard, the ethics of artificial intelligence plays a central role in your areas of research. In your view, which criteria are crucial if AI-supported threat detection – as used today in modern control rooms – is to be deployed responsibly and in a legally sound manner?

Marlon Possard:

“The starting point is the principle of purpose limitation and proportionality. AI-supported threat detection must not operate on the principle of ‘as much data as possible’; rather, it must pursue clearly defined security purposes. Systems for detecting fire incidents, unauthorised access or escalating situations differ significantly, both legally and ethically, from blanket behavioural surveillance.

Secondly, transparency and traceability are needed. Operators must be able to explain the criteria according to which a system triggers an alarm, prioritises events or assesses risks. In safety-critical environments in particular, a ‘black box’ approach is not sufficient. Decisions must be auditable – both technically and organisationally.

Thirdly, human oversight is central. In control rooms, AI should primarily serve as decision support, not as an autonomously acting authority. The final assessment of a situation must remain with qualified human beings – especially where measures involve significant interference with fundamental rights.

Fourthly, data quality is a key factor. Biased or incomplete training data can lead to false alarms or discriminatory assessments. Such systems therefore require continuous validation, regular bias checks and clear quality standards.

Finally, robust governance is required. This means clear responsibilities, documented processes, data protection impact assessments and independent checks. Legal certainty is not created by technology alone, but in particular through institutional and organisational embedding.”
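
To make the point about regular bias checks concrete, here is a minimal sketch of what such a recurring check might look like, assuming a labelled validation set of benign events. The grouping by lighting conditions and the tolerance threshold are hypothetical illustrations, not a reference to any specific product.

```python
# Hypothetical recurring bias check for an alarm model: compare
# false-alarm rates across environment groups on labelled validation
# data and flag disparities that exceed a chosen tolerance.
from collections import defaultdict

def false_alarm_rates(records, tolerance=0.05):
    """records: iterable of (group, alarm_raised, true_incident) tuples."""
    alarms = defaultdict(int)  # false alarms per group
    benign = defaultdict(int)  # benign (non-incident) events per group
    for group, alarm_raised, true_incident in records:
        if not true_incident:
            benign[group] += 1
            if alarm_raised:
                alarms[group] += 1
    rates = {g: alarms[g] / n for g, n in benign.items() if n}
    disparity = max(rates.values()) - min(rates.values())
    return rates, disparity > tolerance

# Illustrative validation sample: low-light footage triggers far
# more false alarms than daylight footage.
validation = [
    ("daylight", False, False), ("daylight", True, False),
    ("daylight", False, False), ("low_light", True, False),
    ("low_light", True, False), ("low_light", False, False),
]
rates, needs_review = false_alarm_rates(validation)
print(rates, "review needed:", needs_review)
```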

Question:

Real-time detection systems can now identify fire, weapons or unusual movement patterns and automatically prioritise incidents. What scientific or legal challenges do you see when machine-based risk analyses become a central basis for decision-making?

Marlon Possard:

“The greatest challenge lies primarily in the shift of decision-making power. When systems not only identify risks but also actually prioritise them, they directly influence what people focus their attention on. This can deliver enormous efficiency gains, but it also entails the risk of what is known as ‘automation bias’. This can lead people to place too much trust in algorithmic assessments.

From a scientific perspective, the question therefore arises as to the validity of such systems under real-world conditions. A model may be highly accurate in the laboratory, but perform significantly less well under stress, in poor lighting conditions or in complex environments. Rare events in particular – for example armed attacks – are statistically difficult to model because high-quality training data is limited.

From a legal perspective, it is especially relevant to determine who bears responsibility if a system prioritises incorrectly or overlooks a danger. As automation increases, responsibilities between manufacturers, operators and users are also becoming progressively blurred. This is currently a central issue, particularly within the European legal framework.

There is also the question of fundamental rights. Systems that analyse movement patterns or behavioural anomalies can quickly enter the realm of sensitive personality profiles. It must therefore be clearly regulated which data may be collected, how long it may be stored and under what conditions automated assessments are permissible at all.”
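
The difficulty with rare events that Possard describes can be made tangible with simple base-rate arithmetic. The detection rates and prevalence below are illustrative assumptions, not measurements from any real system: even a detector that is highly accurate in the laboratory raises mostly false alarms when the target event is very rare.

```python
# Illustrative base-rate arithmetic (numbers are assumptions): even a
# highly accurate detector produces mostly false alarms when the
# event it looks for is very rare.
def precision(prevalence, sensitivity, specificity):
    """Share of raised alarms that correspond to real incidents (Bayes' rule)."""
    true_alarms = prevalence * sensitivity
    false_alarms = (1 - prevalence) * (1 - specificity)
    return true_alarms / (true_alarms + false_alarms)

# A detector that catches 95% of real cases and is 99% specific,
# applied where only 1 in 10,000 frames shows the target event:
p = precision(prevalence=0.0001, sensitivity=0.95, specificity=0.99)
print(f"{p:.2%} of alarms are real")  # roughly 0.94% -- over 99% false
```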

Question:

Technologies for real-time monitoring and automated threat analysis often sit at the intersection of security interests, data protection and social acceptance. What is needed for people to perceive AI-supported security systems as legitimate and trustworthy?

Marlon Possard:

“Trust is not created by technical performance alone, but by comprehensible rules and fair application. People are more likely to accept security technologies if they genuinely understand the purpose they serve and where their limits lie.

First, disclosure is important from the outset. Operators should communicate transparently which data is processed, which risks are to be identified and which decisions are made automatically or by humans.

Secondly, the principle of data minimisation is required. Systems should be designed in such a way that they need as little personal data as possible. Many security applications, for instance, can be implemented using event or pattern recognition without creating comprehensive identity profiles.

Thirdly, accountability is crucial. If AI systems make mistakes, it must be clear how incidents are reviewed, corrected and documented. People are more likely to trust systems when effective control and complaints mechanisms are in place.

Finally, social participation also plays an important role and must not be forgotten. Technologies that intervene deeply in public spaces should not be defined solely by manufacturers or authorities. The reason is obvious: acceptance arises where companies, academia, politics and civil society develop standards together.”
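
The data-minimisation principle from the second point can be pictured with a small sketch. The event schema, salt rotation and hash truncation below are hypothetical design choices, assumed purely for illustration.

```python
# Hypothetical data-minimisation sketch: the stored record keeps the
# event pattern, not an identity profile. Where a badge ID is needed
# at all, it is reduced to a salted hash; rotating the salt bounds
# how long records can be linked back to a person.
import hashlib
import os

ROTATING_SALT = os.urandom(16)  # rotated regularly; old salts are discarded

def minimised_event(event_type, location, badge_id=None):
    record = {"type": event_type, "location": location}
    if badge_id is not None:
        digest = hashlib.sha256(ROTATING_SALT + badge_id.encode())
        record["subject"] = digest.hexdigest()[:16]  # pseudonym, not identity
    return record

print(minimised_event("tailgating", "door 7", badge_id="A-1042"))
```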

Question:

In your academic work, you analyse data-driven law enforcement and AI-supported policing. In your view, which methods or findings from this field could also be valuable for civil security applications, for example in companies or public institutions?

Marlon Possard:

“One important approach is risk-based prioritisation. Modern analysis systems can combine large volumes of heterogeneous data – I am thinking, for example, of sensors, access data or video streams – and evaluate incidents in context. For civil applications, this can help deploy resources in a more targeted manner and identify critical situations more quickly.

Equally relevant is the principle of a ‘human-in-the-loop’ architecture, as also required by the European Union’s Artificial Intelligence Act (AI Act). In safety-critical areas in particular, it has become clear that systems work especially reliably when AI provides preliminary analyses while the final assessment remains with trained human beings. This combination increases both efficiency and accountability.

Research into policing has also shown that technical systems must be evaluated continuously. Models change in performance over time (for example due to changing environments or new threat patterns). Regular audits, quality controls and adaptation mechanisms are therefore required.

Ultimately, interoperability is also a central issue, as security situations rarely arise in isolation. The added value of modern platforms often lies in intelligently integrating different data sources and security systems – albeit under clear data protection and access rules.”
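
The "human-in-the-loop" pattern described above can be sketched in a few lines. The incident fields, scores and sources are invented for illustration; this is not Primion's architecture or the AI Act's wording, merely a minimal model of AI pre-sorting with a human final decision.

```python
# Minimal human-in-the-loop triage sketch: the model only proposes an
# ordering of incidents; a trained operator records the final call.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Incident:
    neg_score: float                    # negated so highest risk pops first
    source: str = field(compare=False)  # e.g. video, access log, sensor
    summary: str = field(compare=False)

def triage(incidents, operator_decision):
    """operator_decision: callable(Incident) -> 'confirm' or 'dismiss'."""
    queue = list(incidents)
    heapq.heapify(queue)  # the AI ordering is a suggestion, nothing more
    while queue:
        incident = heapq.heappop(queue)
        # The final assessment remains with a qualified human being.
        yield incident, operator_decision(incident)

events = [Incident(-0.92, "video", "possible smoke in hall B"),
          Incident(-0.41, "access", "door forced on level 2")]
for incident, decision in triage(events, lambda i: "confirm"):
    print(f"{-incident.neg_score:.2f} {incident.source}: "
          f"{incident.summary} -> {decision}")
```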

Question:

You teach and conduct research on questions of state security, digitalisation and governance. In your view, how can a sensible balance be struck between automated decision support and human responsibility, particularly in operational situations where every second counts?

Marlon Possard:

“The decisive question is not ‘human or machine’, but how the two can work together effectively. My appeal is to see AI as a partner. In time-critical situations in particular, AI can relieve pressure on people by pre-sorting information, recognising patterns or prioritising warnings. This increases both speed and situational awareness.

At the same time, however, we must not delegate responsibility to systems that possess neither contextual understanding nor normative judgement. Human beings can assess uncertainties, make ethical evaluations and interpret exceptional situations. All of these capabilities are of enormous importance, especially in crisis situations.

“In practical terms, this means that AI should make recommendations while leaving room for decision-making. Systems must be designed so that operators can understand, correct or override interventions. Good human-machine interfaces are therefore just as important as the quality of the algorithms themselves.

Training and an effective organisational culture are also indispensable. Anyone working with AI-supported security systems must understand not only their functions, but also their limitations. Technological competence is therefore increasingly becoming part of professional security work.”
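
One way to picture "understand, correct or override" is to make overrides first-class, logged events. The field names and the audit-log structure below are assumptions for illustration, not a description of any existing system.

```python
# Hypothetical sketch: every AI recommendation carries its rationale,
# and operator overrides are logged as first-class events, so final
# decisions remain auditable and correctable.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    action: str      # e.g. "lock down sector 3"
    rationale: str   # signals behind the score, for explainability
    confidence: float

audit_log = []

def decide(rec: Recommendation, operator: str,
           override: Optional[str] = None) -> str:
    final = override if override is not None else rec.action
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "recommended": rec.action,
        "final": final,
        "overridden": override is not None,
        "operator": operator,
        "rationale": rec.rationale,
    })
    return final

rec = Recommendation("lock down sector 3", "smoke + forced-door sensor", 0.71)
print(decide(rec, operator="shift-lead-2", override="send patrol first"))
```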

Question:

Thinking ahead to the next five to ten years: which developments in AI, ethics and critical event detection do you believe will have the greatest influence on solutions such as Critical Event Management, which Primion will present at the Campus Lecture on 20 May?

Marlon Possard:

“In this regard, I see three developments in particular, which I would like to outline.

First, AI will become significantly more context-sensitive. In future, systems will not only identify individual incidents, but will combine complex situational pictures from different sources (for example video analysis, access control, building sensors or communication data). This will create a much more precise real-time assessment of critical situations, which should be viewed positively.

Secondly, the topic of trustworthy AI will become strategically decisive, because regulatory requirements – particularly in Europe – will mean that traceability, documentation, auditability and human control become key competitive advantages. Companies developing security technology will in future have to offer not only powerful systems, but also explainable and governance-capable ones.

Thirdly, cyber-physical security will become more closely integrated. Critical events increasingly affect both physical and digital infrastructures. Modern security platforms will therefore have to think in more integrated terms: an attack on IT systems, building technology or access infrastructures will in future be regarded as a connected security situation.

In the long term, the success of such solutions will not be measured solely by how many incidents they detect, but by how responsibly they support decisions. In my view, the central challenge of the coming years will therefore be to bring technological performance into lasting alignment with rule-of-law and ethical principles. In short: for me, the best security solution is not the most autonomous one, but the one that meaningfully combines the strengths of human beings and AI.”
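
The context-sensitive fusion Possard expects can also be sketched briefly. The sources, weights and alert threshold below are invented for illustration; real systems would learn or calibrate such values rather than hard-code them.

```python
# Hypothetical fusion sketch: individual signals are weak on their
# own, but a combined situational score across sources crosses the
# alert threshold. Weights and threshold are illustrative only.
SOURCE_WEIGHTS = {"video": 0.5, "access_control": 0.3, "building_sensor": 0.2}
ALERT_THRESHOLD = 0.6

def situational_score(signals):
    """signals: mapping of source name -> anomaly score in [0, 1]."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * score
               for source, score in signals.items())

signals = {"video": 0.6, "access_control": 0.7, "building_sensor": 0.5}
score = situational_score(signals)  # 0.5*0.6 + 0.3*0.7 + 0.2*0.5 = 0.61
print("escalate to operator" if score > ALERT_THRESHOLD else "log only")
```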
