Message | 12.11.2025
The use of artificial intelligence (AI) is becoming increasingly important in public administration as well. This goes beyond administrative tasks that can be automated with the help of AI systems: generative AI tools can also be applied to legal questions and problems involving the state, municipalities or public authorities. But is such "legal advice" even permissible? Are cities and municipalities allowed to seek "legal advice" from AI? And how reliable are the results generated by an AI chatbot?
In principle, AI systems may also be used in public administration, provided the legal requirements arising from the AI Regulation and data protection law are met. However, cities and municipalities cannot expect "legal advice" from AI. Under the Legal Services Act (Section 3 (1) RDG), only persons or bodies with a corresponding licence may provide legal services. In addition to licensed lawyers, these include consumer advice centres and trade unions. "Moreover, legal advice from AI is particularly error-prone and risky," says lawyer Zeynep Kenar. Unlike humans, AI does not answer on the basis of knowledge and experience, but on the basis of probabilities. The results are not always legally sound and are sometimes even fictitious ("hallucinated"). Legal laypersons often do not notice these errors at all.
OpenAI has now also recognised these risks and updated its usage policies with effect from 29 October 2025. "All people have a right to protection and security. This means that our services may not be used for: (...) personalised advice, e.g. of a legal nature, for which an authorisation or licence is required, without the involvement of a suitably qualified person," the terms of use state.
In practice, this changes little. ChatGPT continues to answer legal questions, albeit with a disclaimer that its output is "not legal advice". The risk for cities and municipalities therefore remains: AI applications cannot replace "real" human legal advice.