Welcome to the Machine: Putting Your Trust in AI

Nadine Graß

Data may be just a raw material. But Artificial Intelligence has breathed new life into how that raw material is used to add value for businesses through algorithmic processing. The absence of human agency in this process means the trust we would typically place in a person we have interviewed, hired and built a relationship with is now placed in machines, algorithms and data quality. No surprise, then, that AI is considered guilty until proven innocent when it comes to convincing those who need to rely on decisions made by Artificial Intelligence.

Is Your Intelligence Already Artificial?

The advantages of data and AI-driven solutions – improving accuracy, performance, resiliency and real-time adaptation – are plain to see for most tech-based businesses. However, even if your business is not directly reliant on Artificial Intelligence at present, chances are some neural networks will be working in the background of your peripheral activity. Alternatively, AI may well become part of your future value proposition – or is already directly impacting the work of service providers and partners. All the more reason to understand what it means to create responsible and trustworthy Artificial Intelligence.

Building Trust in AI Over Time

Trust is one of the defining factors in human-to-human interactions, so proposing that machines should be trusted is a claim that should not be taken lightly. Yet our reliance on machines to perform crucial tasks keeps growing, and that means learning to trust AI-based systems is not something businesses can bypass in the long run.

Key Performance Indicators (KPIs) in businesses often reveal how we overlook intangible factors like trust, reliability and responsibility. The outcomes that drive businesses are still predominantly measured in terms of productivity and output. “How can we extract more out of this process?” or “What can be done to improve efficiency?” – these are among the primary parameters that drive business decisions. Even as factors concerning sustainability become increasingly important, be they related to the environment or to sourcing and working conditions, they are all too often still an afterthought – or driven by externalities such as policies rather than by intrinsic interest.

However, as new technology gets deployed at scale, and as maturing projects bring a growing number of stakeholders – and consequences – businesses are forced to reconsider the endgame: “Can our machines be trusted not to harm the environment?” “Does every user know what the system is doing?” “How can we ensure the quality of our results?”

In the end, building trust is not simply a matter of trusting the functionality of an AI-driven product, but of having that product deliver results that are reliable and pose no risk to the people who will interact with it outside of a business-first relationship.

Four Paramount Principles, Seven Relevant Requirements to Build Trust

As AI scales and becomes increasingly mainstream, the need to inform our continued use and further development of AI with guiding principles and standards becomes essential. As such, the debate around ethical – or even potentially biased – AI has garnered much attention in recent years. Actors including the European Commission’s independent High-Level Expert Group on Artificial Intelligence and the International Organization for Standardization (ISO) have taken a position and published several papers addressing the issue.

The following four guiding principles derived from the work of these organizations should be guaranteed in the employment of AI:

  • Respect for human autonomy – Creating Artificial Intelligence based on a human-centered approach
  • Prevention of harm – Ensuring that no harm is done in and around the sphere of operations (including to animals or the environment)
  • Fairness – Treating all users the same way by creating one network for all
  • Explicability – Being transparent and accessible, so that users find it easy to understand the intentions and purpose of a system

In addition, there are seven requirements that should be addressed when carrying out a responsible AI project.

  • Human Agency and Oversight – Being aware of and communicating what a system is doing and what it is not doing (support only, for example)
  • Technical Robustness and Safety – Building a robust system that guarantees stable results while staying aware of potential external manipulation
  • Privacy and Data Governance – Anonymizing data automatically
  • Transparency – Making results available in a clear manner
  • Diversity, Non-discrimination and Fairness – Creating a system without bias
  • Societal and Environmental Well-being – Guaranteeing a system in which all living beings are treated responsibly
  • Accountability – Pointing out all legal issues in a clear and comprehensible manner
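Some of these requirements can be made concrete in code. As a minimal, hypothetical sketch in Python – the field names, the salting scheme and the metric below are illustrative assumptions, not any specific product’s implementation – pseudonymizing direct identifiers addresses the privacy requirement, while a demographic-parity gap offers one simple check against group bias:

```python
import hashlib
from collections import defaultdict

def pseudonymize(record, pii_fields=("name", "email")):
    """Replace direct identifiers with salted SHA-256 digests."""
    salt = "example-salt"  # in practice, a secret managed per deployment
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]  # truncated digest stands in for the identifier
    return out

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved: bool) pairs.

    Returns the largest difference in approval rate between any two groups;
    0.0 means every group is approved at the same rate.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)
```

A fairness audit would then flag a model whose gap exceeds an agreed threshold; real projects typically rely on established tooling and legal review rather than a hand-rolled check like this one.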

What Do You Stand to Gain from Considering Trustworthiness?

Change can seem to carry inherent risk. As an example, consider supporting systems for structural inspections. Given rigid legal regulations and the limited room they leave with regard to safety protocols, it can be difficult to establish new methods and practices such as AI-based visual inspection. Whether you are working in a new field or optimizing an existing system, ensuring that all stakeholders are involved, have equal access to information, and are connected to your system on equal terms lays the foundation for successfully implementing change – one use case at a time.

The implementation and impact of Artificial Intelligence are still the subject of considerable debate. Acting responsibly – and in line with the above guidelines – at all levels will help to remove uncertainty and ensure the successful implementation of your system. Building trust can be a game changer. Maintaining an honest relationship with all involved stakeholders will improve your credibility and take your services to the next, more efficient – and also more trustworthy – level.


STRUCINSPECT is dedicated to addressing the topic of AI and human interaction in structural inspection head on, as a part of a new project, funded in part by the Austria Wirtschaftsservice initiative, AWS-Digitalization. Expanding on the existing AI network, working to automatically identify critical damages in civil engineering projects, this initiative will help lead the way towards a fully automated service for structural inspections.

If you are ready to start interacting and are on the lookout for additional support for your projects, STRUCINSPECT will happily keep you up to date with insights and information on topics relevant to your business. Alternatively, talk to an expert today to understand how AI-assisted digital inspections can improve your infrastructure asset management.
