Please note: The times given in the conference program of OOP 2024 correspond to Central European Time (CET).
Track: Testing & Quality
Tuesday, 30.01. to Wednesday, 31.01.
While AI systems differ in some respects from "traditional" systems, testing them does not have to be more difficult; knowing the right questions to ask will go a long way. In this talk we will:
- Arm you with a checklist of questions to ask when preparing to test an AI system
- Show you that testers and data scientists have common ground when testing AI systems
Keep calm and test on - AI systems are not that different from "normal" systems.
Target Audience: Testers, Data Scientists, Developers,…
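Not part of the talk, but as a minimal illustration of the point above that an AI system can be exercised with the same kinds of tests as a "normal" system, here is a sketch in Python. The `load_model` stand-in and the example inputs are hypothetical:

```python
# A minimal sketch, not from the talk: treating an AI model like any other
# unit under test. `load_model` and the example inputs are hypothetical.
import pytest

def load_model():
    """Hypothetical stand-in for loading a trained sentiment classifier."""
    def model(text: str) -> str:
        return "positive" if "good" in text.lower() else "negative"
    return model

@pytest.fixture
def model():
    return load_model()

def test_known_cases(model):
    # Ordinary example-based check, exactly as for any other function.
    assert model("This product is good") == "positive"

def test_invariance_to_irrelevant_changes(model):
    # A simple metamorphic check: trailing whitespace should not
    # change the prediction.
    assert model("This product is good") == model("This product is good   ")
```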
Security engineering, from TARA and security requirements to security testing, demands mechanisms to generate, verify, and connect the resulting work products. Traditional methods require a great deal of manual work and still produce inconsistencies and imbalanced tests. Generative AI enables novel methods for semi-automatic cyber security requirements engineering, traceability, and testing. In this industry presentation, we show two promising approaches based on NLP and transformers and how to embed them into an…
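As a hedged illustration only, not necessarily the presenters' approach: one way transformer embeddings can support requirement-to-test traceability is to embed requirement and test descriptions and rank candidate links by cosine similarity. The model name, requirement texts, and test texts below are assumptions for the sketch:

```python
# A minimal sketch, not the presenters' method: using transformer sentence
# embeddings to suggest traceability links between security requirements
# and test case descriptions. Model name and texts are illustrative only.
from sentence_transformers import SentenceTransformer, util

requirements = [
    "The ECU shall reject firmware images with an invalid signature.",
    "Diagnostic access shall require authentication.",
]
tests = [
    "Flash a firmware image signed with an untrusted key and verify rejection.",
    "Attempt a diagnostic session without credentials and expect denial.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
req_emb = model.encode(requirements, convert_to_tensor=True)
test_emb = model.encode(tests, convert_to_tensor=True)

# Cosine similarity matrix: rows = requirements, columns = tests.
scores = util.cos_sim(req_emb, test_emb)

for i, req in enumerate(requirements):
    best = scores[i].argmax().item()
    score = scores[i][best].item()
    print(f"Requirement {i} -> Test {best} (score {score:.2f})")
```

In such a setup a human reviewer would still confirm the suggested links, matching the "semi-automatic" framing in the abstract.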
Many developers invoke technical debt to explain the misfortunes and troubles of their codebase and delivery. While unmanaged technical debt weighs down an architecture and exerts drag on its schedule, it is more often an effect than a cause. In this talk, we will look at what is and is not meant by technical debt, with a view to properly attributing the root and recurring cause to technical neglect rather than technical debt. Without seeing technical neglect for what it is, we will continue to…
In the evolving AI landscape, the EU AI Act introduces new standards for assuring high-risk AI systems. This presentation will explore the tester's role in navigating these standards, drawing from the latest research and from our experiences with an Automatic Employment Decision System, a high-risk AI. We'll discuss emerging methodologies, conformity assessments, and post-deployment monitoring, offering insights and practical guidance for aligning AI systems with the Act's requirements.
Target…
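As a hedged illustration only, not taken from the session: post-deployment monitoring can be as simple as comparing the distribution of a model input feature in production against a reference sample. The data below is synthetic:

```python
# A minimal sketch, not the speakers' method: one simple form of
# post-deployment monitoring, checking a model input feature for
# distribution drift against a reference sample. Data is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Reference distribution captured at deployment time (assumed stored).
reference = rng.normal(loc=0.0, scale=1.0, size=1000)

# Recent production inputs; here simulated with a shifted mean.
production = rng.normal(loc=0.5, scale=1.0, size=1000)

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```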