A Local, Continental (African) and International Overview of the Law as it Relates (or Tries to Relate) to Artificial Intelligence (AI)
D van der Merwe*
PER/PELJ - Pioneer in peer-reviewed, open access online law publications
Author Daniel van der Merwe
Affiliation University of South Africa
Email vdmerdp@mweb.co.za
Date Submitted 24 April 2024
Date Revised 27 November 2024
Date Accepted 27 November 2024
Date Published 11 December 2024
Editor Prof T Mmusinyane
Journal Editor Prof W Erlank
How to cite this contribution
Van der Merwe D "A Local, Continental (African) and International Overview of the Law as it Relates (or Tries to Relate) to Artificial Intelligence (AI)" PER / PELJ 2024(27) - DOI http://dx.doi.org/10.17159/1727-3781/2024/v27i0a18498
Copyright
DOI http://dx.doi.org/10.17159/1727-3781/2024/v27i0a18498
Abstract
Keywords
Accusatorial system; African Union; Artificial Intelligence; civil procedure; comparative law; criminal law; criminal procedure; evidence; inquisitorial system; international bodies; jurisprudential concerns; privacy; Roman-Dutch law; science fiction; security.
1 Introduction
Artificial Intelligence (AI) has burst upon the legal scene much faster than any other development in centuries of legal history. The present article endeavours to gauge its overall effect on the law, both in terms of quantum and of quality.
On a website entitled "SirBacon.org"1 and in a report of an Oxford debate between "AI Shakespeare" and "AI Oscar Wilde",2 the following pronouncement appears:
Daniel van der Merwe. B-Juris LLB LLD Cert Elec Law. (Retired) Professor, Law School, University of South Africa. E-mail: vdmerdp@mweb.co.za or danavandermerwe@gmail.com. ORCiD: https://orcid.org/0000-0001-8420-9853. 1 Based on the (not totally implausible) theory that Sir Francis Bacon actually wrote the so-called "Shakespeare" plays. SirBacon.org 2022 https://sirbacon.org/bacon-forum/index.php?/topic/193-ai-artificial-intelligence-shakespeare-and-francis-bacon/. 2 Connock and Stephen 2022 https://singularityhub.com/2022/06/19/ai-shakespeare-and-ai-oscar-wilde-debate-machine-creativity-at-oxford/.
Nay, nay I say! This cannot be!
That machines should e'er surpass our art.
We are the masters, they the slaves,
And thus it ever shall be so.
They learn, tis true, but they learn
Only what we bid them to learn, no more.
(quotation not complete)
Of course, had Shakespeare and Wilde been aware of the recent phenomenon of Artificial General Intelligence (AGI), the above pronouncement might have been less confident. AGI has been defined as "an AGI agent that is comparable or 'smarter' than a human".
3
3 Van der Merwe 2023 Obiter 957. As the title of the present article should convey, the author is hoping here to provide both a more comparative and philosophical approach.
When discussing AI and the Law one also has to keep in mind the distinction between the legal problems that AI might cause for the Law and the undeniable benefits that AI might correspondingly bring to the Law. This distinction is explained as follows in the work Information and Communications Technology Law:
4
4 Van der Merwe et al Information and Communications Technology Law 1-2.
In the latter area, a study is made of the ways in which the computer serves as an aid in the storage and retrieval of legal information, how it may be used to measure the extent and exercise of judicial discretion and how a computer-related field such as artificial intelligence might be used to help judicial officers in the exercise of their discretion.
5
5 Van der Merwe et al Information and Communications Technology Law 1-2. It is interesting to note the (almost prophetic) use of "artificial intelligence" as one of the best ways to illustrate the general concept of "legal informatics"!
The authors then go on to list some of the ways in which legal informatics is indispensable to the proper administration of the law.
6
6 Van der Merwe et al Information and Communications Technology Law 2ff.
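By way of illustration only, the following minimal Python sketch (the case names and summaries are entirely hypothetical) conveys in miniature what is meant by the computer-assisted storage and retrieval of legal information; real legal-informatics systems are, of course, vastly more sophisticated, with ranking, synonym handling, citators and the like:

```python
# Illustrative sketch only: a toy inverted index over hypothetical case summaries,
# showing in miniature the "storage and retrieval of legal information".
from collections import defaultdict

cases = {
    "Case A": "delictual liability for harm caused by an autonomous vehicle",
    "Case B": "admissibility of computer-generated evidence in criminal trials",
    "Case C": "copyright in works generated by artificial intelligence",
}

# Build the index: each word points to the set of cases mentioning it.
index = defaultdict(set)
for name, summary in cases.items():
    for word in summary.lower().split():
        index[word].add(name)

def retrieve(*terms):
    """Return the cases whose summaries contain every search term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(retrieve("evidence"))                    # {'Case B'}
print(retrieve("artificial", "intelligence"))  # {'Case C'}
```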
2 A philosophical (religious?) view of the phenomenon of AI
Many young law students at Afrikaans-speaking campuses have been exposed to the legal philosophy of (mainly Dutch) deep thinkers such as Dooyeweerd and Stoker. The present author still has in his personal library the two volumes entitled Oorsprong en Rigting
7
7 Origin and Direction (my own translation) (Stoker Oorsprong en Rigting). 8 Stoker Oorsprong en Rigting 11. 9 Van der Vyver and Van Zyl Introduction to Jurisprudence.
Perhaps because of the more naturalistic worldview of many scientists, the idea of an intelligent Creator seems to have fallen by the wayside. During the present explosion of interest in the world of AI, very few scientists in the latter field seem to have given any thought to a Divine role in the creation and maintenance of intelligent life on earth. The present author was therefore delighted to find a notable exception, namely a work by Lennox entitled 2084: Artificial Intelligence and the Future of Humanity.
10
10 Namely Lennox 2084.
In Chapter Four of this work, enticingly entitled "Narrow Artificial Intelligence: The Future is Bright?", Lennox dismisses the impression "that AI is only concerned with speculative and scary ideas whose implementation is just around the corner".
11
11 Lennox 2084 53. 12 Lennox 2084 54-60.
However, in Chapter Five the same author devotes some attention to the (often justified) fears that AI might also inspire amongst prospective users. In a chapter more cautiously entitled "Narrow AI: Perhaps the Future is not so Bright After All?", Lennox discusses the problems that resulting job losses might create in the labour market. He feels that many of these fears might be overblown - even as some jobs are lost, new ones will be created continually:
Think, for example, of the consequences of the invention of the wheelbarrow, the steam engine, or the electric motor and the automobile.
13
13 Lennox 2084 64.
Uppermost in the minds of most citizens will probably be the threat to individual privacy that governmental (or even commercial) surveillance by means of AI might bring about. In this regard it is worth noting that Lennox has purposely titled his work 2084: Artificial Intelligence and the Future of Humanity, no doubt in order to juxtapose it with Orwell's classic work entitled 1984
14
14 Orwell 1984.
Chapter Six, entitled "Upgrading Humans", bristles with even scarier examples. The present author has never seen this so eloquently expressed
as by CS Lewis in his work The Abolition of Man:
15
15 Lewis Abolition of Man.
What we call Man's power over Nature turns out to be a power exercised by some men over other men, with Nature as its instrument.
16
16 Lewis Abolition of Man 55 (quoted by Lennox).
And also:
In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car.
17
17 Lewis Abolition of Man 58 (quoted by Lennox).
Lennox then cites the well-known examples of Hitler and Stalin during the Second World War. The present author finds it interesting that one might also think of the more positive examples set by their counterparts in the United Kingdom, especially during the so-called "Battle of Britain". With his stirring speeches the then Prime Minister of the United Kingdom during the years of the Second World War, Winston Churchill, set just such a positive example.
18
18 1939 to 1945. 19 1899 to 1902, when fighting for the Boers against the British. 20 1914 to 1918, when fighting for Britain and its allies (including South Africa) against Germany and its allies.
3 A summation of the philosophical aspects of AI and its importance to the Law
An interesting aspect of AI is the fear it inspires in some of us. This also leads to earnest seekers after the truth in this regard being viewed with suspicion. To the present author this reaction is a reminder of the truth of the old saying: "The pioneers are the guys with the arrows in their backs!" A more accurate truth might lie in the even older saying "Cogito ergo sum!"
21
21 "I think, therefore I am" from Descartes in his 1637 work Discourse on Method. We are now confronted with disembodied "thinkers" and will simply have to design new rules to deal with this phenomenon.
In the past there have been examples of new (mostly mechanical) "inventions" reshaping the law. Thus the invention of the printing press and movable type has given us the wonderful world of Copyright.
22
22 See Van der Merwe 1998 SALJ 180-201; Van der Merwe 1999 International Review of Law, Computers and Technology 303-315; Pistorius 2006 PELJ 1-27. 23 An English word of which the direct Latin translation would be "written by hand"!
One might safely say that AI also has the potential to reshape the law, except that the change is likely to be much more rapid as well as more radical, leading one to wonder whether certain fields of law will even be able to survive such change! An interesting perspective in this regard has been published in a work entitled Research Handbook on the Law of Artificial Intelligence.
24
24 Barfield and Pagallo Research Handbook on the Law of Artificial Intelligence. 25 Barfield "Towards a Law of Artificial Intelligence" 2. 26 Barfield "Towards a Law of Artificial Intelligence" 2.
As long as robots or robotic appliances merely performed their functions in a series of steps pre-determined by the programmer, it was obvious that the latter (or his or her employer) was the proper person to be held liable. But what if an AI programme was written not by a human but by an algorithm? Barfield points out that the "foreseeability" test, which has been "a key ingredient in negligence",27 becomes difficult to apply where the harmful conduct of such a system could not reasonably have been foreseen by any human.
27 Barfield "Towards a Law of Artificial Intelligence" 4. 28 Barfield "Towards a Law of Artificial Intelligence" 8. 29 Barfield "Towards a Law of Artificial Intelligence" 12.
To return to the main theme of this section of my article, will such an avatar have sufficient moral insight to be able to make ethically correct judgments? The short answer to this sounds like an over-simplification: "If we train it to do so." It is interesting to note that not only religious believers are of this opinion. The "leading German atheist intellectual" Jürgen Habermas also believes that this is a necessary adjunct to any moral development:
30
30 As cited (and categorised) by Lennox 2084 142.
Universal egalitarianism, from which sprang the ideals of freedom and a collective life in solidarity, the autonomous conduct of life and emancipation, the individual morality of conscience, human rights and democracy, is the direct legacy of the Judaic ethic of Justice and the Christian ethic of love. This legacy, substantially unchanged, has been the object of continual critical appropriation and reinterpretation. To this day, there is no alternative to it. And in light of the current challenges of a post-national constellation, we continue to draw on this heritage. Everything else is just idle postmodern talk.
To my mind the religious thinker Lennox brings a measured, objective answer to many AI-driven fears:
31
31 Lennox 2084 144-146.
(M)y commitment to the biblical world view, far from turning me into a Luddite vis-à-vis technology, makes me deeply thankful to God for developments that bring hope to people in this damaged world who would otherwise have none.
The present author feels this to be an accurate summation of this philosophical section.
In part B of this paper it will be shown how data controls such as those provided by EDGAR and XBRL guarantee the reliability of the data that need to be used in the context of financial reporting and regulatory compliance.
4 A comparative perspective - AI in the broader African context
The African continent has not always been one of the easiest to evaluate in an AI context. From a phrase credited to Pliny the Elder,32 ex Africa semper aliquid novi,33 to the modern notoriety of so-called "419" fraud,34 something new has always been emerging from this continent.
32 AD 23-79. 33 "From Africa there is always something new". 34 The number "419" is the relevant section of the Nigerian Criminal Code dealing with this type of fraud.
With specific reference to the law as it relates to AI in Africa, a very useful overview has been provided in an online presentation by Brand entitled Artificial Intelligence and the Law: An African Perspective.
35
35 Brand 2024 https://law.mpg.de/event/artificial-intelligence-and-the-law-an-african-perspective/.
In a slide entitled "Ethical AI" he discusses the 2018 "Universal Guidelines for AI". These emphasise the right to transparency as well as the right to human determination. In addition, the guidelines impose obligations of fairness, accuracy and reliability, all factors leading to validity, as well as an obligation to perform proper assessments and to assume full accountability. Against these Guidelines, Brand juxtaposes the principles developed by the OECD.36
36 Organisation for Economic Co-operation and Development. It has developed specialised groups such as OECD.AI, the OECD Framework for the Classification of AI Systems, the OECD.AI Network of Experts, the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI and the OECD AI Incidents Monitor. 37 Probably with regard to all six principles.
It is interesting to note that there is a fair amount of overlap between the Universal Guidelines, the OECD-principles and the principles put forward by Brand. As a lawyer specialising in Criminal Law and Evidence, however, the present author would have liked to see some more principles linked specifically to these particular branches of the law. The groundwork in this regard has been done already by means of the Budapest Convention on Cybercrime.
38
38 Council of Europe Convention on Cybercrime (ETS No 185), opened for signature at Budapest on 23 November 2001.
Under the heading, "AI policy developments in Africa" Brand then analyses policy principles and historical developments in several such systems, specifically pertaining to Africa. He states that Mauritius, for instance, progressed to "Regulation on robotic and AI based services" in 2020. Kenya
has a 2023 "Proposal for a Robotics and AI Society Bill". It is interesting to note that the small but progressive country of Rwanda seems determined to position itself as "Africa's AI Lab and Responsible AI Champion". It hopes to do so by involving both the private and public sectors and by promoting AI literacy and ethical AI guidelines. It is also of interest to note that Egypt adopted a "Charter for Responsible AI" in February 2023 and that this was "built on OECD AI principles and UNESCO
39
39 United Nations Educational Scientific and Cultural Organisation. Besides this, the United Nations itself has recently unveiled a "Zero Draft Global Digital Compact". This serves as a foundational framework for international cooperation and governance on AI and other aspects of the digital realm. 40 Readers generally interested in the extra-continental scene are referred to paragraph 5 below, where the role of multi-national bodies is further explored, with lots of acronyms thrown in as a special treat!
Finally the attention of readers should also be drawn to a draft document from South Africa's Department of Communications and Digital Technologies (DCDT) and the Artificial Intelligence Institute of South Africa (AIISA).
41
41 See Mashishi 2023 https://www.dcdt.gov.za/images/phocadownload/AI_Government_Summit/National_AI_Government_Summit_Discussion_Document.pdf. Mention is also made of the establishment of a Centre for Artificial Intelligence Research (CAIR) by the Department of Science and Innovation (DSI).
Upon first reading, the document gives the impression of having been prepared in haste and of not having been subjected to proper proofreading. Unfortunately, this criticism is not limited to form and style but extends to its contents as well. This is borne out by a contribution on the Businesstech website entitled South Africa’s Proposed AI Plan Needs a Rework: Experts.
42
42 See Thorne 2024 https://businesstech.co.za/news/government/768147/south-africas-proposed-ai-plan-needs-a-rework-experts/.
In the view of a legal expert from Werksmans quoted in that contribution, the document gives the impression of simply being "a rough draft":
it is repetitive, has conflicting provisions and is not sufficiently advanced, specific or practical in clarifying and setting a clear policy approach and informed plan to offer meaningful guidance on the way forward.
43
43 Thorne 2024 https://businesstech.co.za/news/government/768147/south-africas-proposed-ai-plan-needs-a-rework-experts/.
This view is supported by L Pierce at PPM Attorneys, who calls the draft document "convoluted, [and] complicated" and also says that it "lacks clear deliverables and fails to allocate responsibility for their delivery". He also criticises the draft for introducing "unrealistic" timelines for the implementation of AI regulation and the accompanying infrastructure projects. His conclusion comes down to "back to the drawing board":
My view is that, rather than stakeholders commenting on this highly flawed Draft AI Plan, it should be completely reworked and released when it is in a more practical and improved form.
44
44 Thorne 2024 https://businesstech.co.za/news/government/768147/south-africas-proposed-ai-plan-needs-a-rework-experts/.
In order to call upon some AI guidance on this subject, the present author put the following question to "Claude":45
45 An Artificial Intelligence system that the present author has found to be surprisingly objective and quite knowledgeable on almost any field.
I have found that the CIPC (Companies and Intellectual Property Commission) regularly updates XBRL for South Africa. Is there a link between AI and XBRL?
I have found the answer quite stimulating and in tune with some of my own thoughts on the subject. Claude opines that "the structured nature of XBRL data makes it particularly well-suited for AI applications". He (it?) then promptly provides a number of fields where this is possible. These are "Data Analysis and Insights", "Automated Reporting", "Regulatory Compliance" (especially encouraging for a lawyer!), "Enhanced Data Quality", "Intelligent Search and Retrieval", "Predictive Analytics", "Natural Language Generation", "Continuous Auditing", "Taxonomy Development" and "Cross-lingual Financial Analysis". On each of these Claude gives practical, legal illustrations that, to the present author at least, provide some fascinating insights into future co-operation between these two systems.
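By way of illustration only, the following minimal Python sketch suggests how the structured, tagged nature of XBRL facts lends itself to the kind of automated screening Claude describes under "Data Analysis and Insights". The concepts, figures and the crude statistical threshold used here are purely hypothetical and are not drawn from any actual CIPC filing or taxonomy:

```python
# Illustrative sketch only: a toy anomaly screen over XBRL-style tagged facts.
# Tag names, figures and the threshold are hypothetical.
from statistics import mean, pstdev

# XBRL reduces a financial report to machine-readable (concept, period, value) facts.
facts = [
    {"concept": "Revenue", "period": "2020", "value": 1_000_000.0},
    {"concept": "Revenue", "period": "2021", "value": 1_050_000.0},
    {"concept": "Revenue", "period": "2022", "value": 1_100_000.0},
    {"concept": "Revenue", "period": "2023", "value": 3_400_000.0},  # deliberate outlier
]

def flag_outliers(facts, concept, z_threshold=1.5):
    """Flag values that deviate sharply from the series mean (a crude screen)."""
    values = [f["value"] for f in facts if f["concept"] == concept]
    mu, sigma = mean(values), pstdev(values)
    flagged = []
    for f in facts:
        if f["concept"] != concept or sigma == 0:
            continue
        z = abs(f["value"] - mu) / sigma
        if z > z_threshold:
            flagged.append((f["period"], f["value"], round(z, 2)))
    return flagged

print(flag_outliers(facts, "Revenue"))  # [('2023', 3400000.0, 1.73)]
```

A real system would of course learn its thresholds from far larger datasets, but the point stands: because every figure carries a machine-readable tag, such screening can be automated across thousands of filings.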
5 AI in the European Union (EU)
Thus far the EU has proven to contain the most fertile soil for the growth of AI legislative innovation. This includes the all-absorbing task of agreeing upon and adopting legislation that is of general application to all EU member states. Recently such legislation has seen the light of day in the shape of the European Union Artificial Intelligence Act.
46
46 After first publication on 21 April 2021 and following upon many discussions and "trilogues" the final provisional agreement was published on 26 January 2024 and a Final Provisional Text was passed by the EU Parliament on 13 March 2024. See EU 2024 https://artificialintelligenceact.eu/the-act/ (the EU AI Act).
A concise introduction to this Act may be found in a brief web-site article entitled The European Union's Artificial Intelligence (AI) Act in a Nutshell.
47
47 See Brand 2023 https://www.swart.law/post.aspx?id=107.
On the same website the author also mentions that a system of "guardrails" is to be followed when using "high-risk AI systems and foundational models". For example, using biometric identification systems to categorise natural persons according to sensitive characteristics is forbidden. Here one would immediately think of sensitive racial features, which might lead to discrimination.
48
48 Interestingly enough, while such characteristics might lead to negative results in countries such as the United States of America, in South Africa policies such as "affirmative action" and "employment equity" lead to favourable results for persons, such as Blacks, previously discriminated against.
Brand then quotes the definition of AI contained in this Act:
a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
49
49 Brand 2023 https://www.swart.law/post.aspx?id=107.
He also cites some ethical principles that are to be followed by AI systems. The author then ends off with a warning about special measures to be adopted when so-called "high-risk AI systems" are used. These will be discussed with reference to another article cited in the next few paragraphs. He also issues a timely warning that it remains an open question how adaptable the new AI Act will be in dealing with the slew of brand-new technological developments.
An illuminating discussion of the impact of the Act on fundamental human rights as it now reads and in its further phases of development may also be found in another article entitled Fundamental Rights Impact Assessments under the EU AI Act: Who, What and How?
50
50 Waem, Dauzier and Demircan 2024 https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/.
The authors expose us to a fascinating new set of acronyms, such as the "FRIA" (Fundamental Rights Impact Assessment). Performing this assessment is a statutory duty that will be imposed on "certain deployers and providers of high-risk AI systems" in terms of Articles 6(2) and 29a as well as Annexure III of the Final Provisional Text. The authors warn that:
conducting FRIAs will be a challenge and not all deployers of high-risk AI systems will be equipped to fully assess the risks of the high-risk AI system deployed.
51
51 Waem, Dauzier and Demircan 2024 https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/.
They also warn that it is not yet clear which fundamental rights might be affected by a FRIA. The "techies" who install and deploy the high-risk systems unfortunately do not (yet) speak the language of lawyers holding forth on a system of fundamental rights that needs dedicated legal protection. Article 29a.1 includes a list of details that FRIAs should contain. These include descriptions of the deployer's processes, the period and frequency in which these AI systems will be used, the categories of persons or groups likely to be affected, the specific risks involved and the implementation of "human oversight measures", as well as the necessary remedial measures should any of these risks materialise. It is also of interest to note that the Final Provisional Text states that if obligations in terms of the FRIAs have already been met through a prior data protection impact assessment (DPIA), "the FRIA shall be conducted in conjunction with the DPIA".
52
52 Article 29a.4 of the EU AI Act.
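Purely by way of illustration, the content items just listed could be reduced to a simple checklist that a deployer works through before putting a high-risk system into use. The following minimal Python sketch uses field names that are the present author's paraphrase of those items, not the statutory wording:

```python
# Illustrative sketch only: the FRIA content items modelled as a simple checklist.
# Field names are a paraphrase of the listed items, not the text of the Act.
from dataclasses import dataclass, fields
from typing import List

@dataclass
class FRIAChecklist:
    deployer_processes: str = ""
    period_and_frequency_of_use: str = ""
    affected_categories_of_persons: str = ""
    specific_risks_of_harm: str = ""
    human_oversight_measures: str = ""
    remedial_measures: str = ""

def missing_items(fria: FRIAChecklist) -> List[str]:
    """Return the checklist items a deployer has not yet completed."""
    return [f.name for f in fields(fria) if not getattr(fria, f.name).strip()]

draft = FRIAChecklist(
    deployer_processes="Automated screening of loan applications",
    affected_categories_of_persons="Retail credit applicants",
)
print(missing_items(draft))
# ['period_and_frequency_of_use', 'specific_risks_of_harm',
#  'human_oversight_measures', 'remedial_measures']
```

Even so simple a structure makes the authors' warning concrete: completing the last two fields meaningfully requires exactly the fundamental-rights expertise that many deployers will lack.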
Moving away from all the acronyms, some attention needs to be devoted to reviewing the all-important Fundamental Rights Charter of the EU. This consists of six chapters dealing with the fundamental rights relating to dignity, freedoms, equality, solidarity, citizens' rights and justice. In this regard the authors warn as follows:
Taking into account the fact that there is extensive case-law from the Court of Justice of the EU and the European Court of Human Rights with regard to each of these rights, it requires an in-depth knowledge to assess the impact of high-risk AI systems on these rights.
53
53 Waem, Dauzier and Demircan 2024 https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/.
After pointing out some practical problems that the marriage between the two systems might entail, the authors are sceptical as to whether "deployers will be capable of 'selecting' the relevant fundamental rights". They emphasise the need for perspicacious governance as well as the need to make full use of existing best practices. An analysis of similar systems world-wide shows that Canada's "Algorithmic Impact Assessment tool", the EU-oriented ALIGNER and the Dutch "Fundamental Rights and Algorithms Impact Assessment Template (FRAIA)" all show promise in this regard.
An interesting question that might arise once again is whether the criminal law aspects of AI in the EU have been addressed sufficiently. A document that shows some promise in this regard is the Report on Artificial Intelligence in Criminal Law and Its Use by the Police and Judicial Authorities in Criminal Matters.
54
54 Vitanov 2021 https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html. 55 Vitanov 2021 https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html 25ff.
The report also restates the importance of the right to a fair trial "throughout the entirety of criminal proceedings, including in law enforcement". This means that the rights of the defence such as those to an "independent court", to "equality before the law" and to the "presumption of innocence" should be upheld and emphasised at all times. This would be the case particularly when AI is being used. Specific attention should be paid to the protection of personal data and to the fact that AI and its related technologies, despite their "self-learning abilities", still require some human intervention. Vigilance is to be exercised not to transgress upon intellectual property rights.
Some positives are finally recognised in the use of AI in criminal law, including its use in creating statistical databases as far as criminal behaviour is concerned. However, these databases should be "anonymised" to protect data and personal privacy. The protection of fundamental rights and freedoms should be paramount throughout the whole process of investigation. This can be ensured only by the "human-in-command" principle and (human) verification of "AI-produced or AI-assisted" outputs.
Cautionary rules should also be applied before accepting "biometric recognition software".
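Purely as an illustration of the "anonymisation" of statistical databases mentioned above (the record fields are hypothetical), the following minimal Python sketch shows how crime statistics can be compiled while the personal identifiers are discarded altogether, rather than merely disguised:

```python
# Illustrative sketch only: building an aggregated (and in that sense anonymised)
# crime-statistics table from case records, discarding personal identifiers entirely.
from collections import Counter

case_records = [
    {"accused": "A. Person", "offence": "fraud", "year": 2023},
    {"accused": "B. Person", "offence": "fraud", "year": 2023},
    {"accused": "C. Person", "offence": "theft", "year": 2023},
]

def anonymised_counts(records):
    """Keep only (offence, year) counts; no personal data survives the aggregation."""
    return Counter((r["offence"], r["year"]) for r in records)

print(anonymised_counts(case_records))
# Counter({('fraud', 2023): 2, ('theft', 2023): 1})
```

The design point is that aggregation of this kind, unlike the mere hashing of names, leaves nothing in the statistical database from which an individual accused could be re-identified.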
Whereas one would like to see some of these principles embodied in any system of criminal law, evidence and procedure, they have obviously been developed against the background of a continent that mainly makes use of the inquisitorial system for the investigation and trial of an accused person. One cannot but wonder how these principles would translate in countries such as the United Kingdom and the USA, where an accusatorial (jury-based) system of criminal procedure is followed. For historical reasons South Africa would also fall into the latter group. These matters will be discussed under the following heading.
6 AI in countries making use of the accusatorial system of criminal procedure
In the so-called "common-law countries"56 the use57 of a jury in criminal trials has given rise to strict, largely judge-made rules governing the admissibility of evidence and the manner of its presentation.
56 For instance, the USA and the UK. 57 Sometimes former use, as in South Africa. Nonetheless, this country has retained the attendant rules on the presentation of evidence as well as on cross-examination.
Although less dramatic, the situation in civil cases in these countries is comparable as far as procedure and evidence are concerned. The goalposts do shift significantly, however, in that only proof "on a preponderance of probabilities" is needed, as against the proof "beyond reasonable doubt" required in criminal cases.
58
58 For greater detail as to the incidence and quantum of proof in criminal and civil cases, the reader is referred to "Chapter 3 - The Onus of Proof" in Zeffertt, Paizes and Skeen South African Law of Evidence.
To a jurist equipped to deal with scenarios of this type, the well-intentioned but vague prescriptions of the EU-proposals are not sufficiently detailed.
59
59 This criticism also applies to international instruments such as those formulated by the AU, the United Nations and other multi-national organisations - see under the next heading as well as footnotes 16-19 above.
was also a script-writing robot? Perhaps one should rather go where the money really is and address the company making use of these instruments. Neither the UK nor the USA has shown itself to be in any particular hurry to adopt comprehensive AI-related legislation. In the United States President Biden issued an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence on 30 October 2023. Its stated composite goal is that it:
establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovations and competition, advances American leadership around the world, and more.
60
60 White House 2023 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
Seemingly ominous is the fact that the US government appears to see itself as an all-powerful "net nanny" in this regard. Developers of powerful AI systems have to "share their safety test results and other critical information with the U.S. government" in terms of the Defense Production Act. The Departments of Energy and Homeland Security will assess AI system threats to critical infrastructure and "chemical, biological, radiological, nuclear and cybersecurity risks".
61
61 Directly quoted from White House 2023 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. 62 Probably on account of race. 63 White House 2023 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. It is perhaps disconcerting to note that Nigeria is included in this list, but that South Africa seems not to have made the grade.
The UK has also been cautious about adopting general AI-related legislation. In an article entitled UK Rules Out New AI Regulator64 the BBC reported on this reluctance.
64 MacCallum 2023 https://www.bbc.com/news/technology-65102210.amp.
A white paper from its Department of Science, Innovation and Technology has proposed some rules for the use of these new technologies.
65
65 BCS 2022 https://www.bcs.org/articles-opinion-and-research/light-touch-approach-to-ai-regulation-welcomed-by-it-industry-body/.
This approach has proudly been described as "a light touch" by the UK Government and some commentators, but has equally had its detractors.
66
66 See Grayling 2023 https://grayling.com/news-and-views/balancing-innovation-and-responsibility-examining-the-uks-light-touch-approach-to-ai-regulation/. 67 See Manancourt 2024 https://www.politico.eu/article/rishi-sunak-ai-us-eu-forge-britian-london-chatgpt/.
Slightly disappointed with the above results, the present author again decided to ask the AI programme "Claude" for some thoughts on the subject:
For most of my life I have been teaching the Law of Evidence and I was wondering if any of these latest developments would be helpful in this regard?
Once again I was surprised at the quality of knowledge instantaneously provided by this AI system! Claude was of the opinion that "the intersection of XBRL, AI and the Law of Evidence could have significant implications".
The main headings of the aspects that he mentioned were "Digital Evidence Authentication", "Pattern Recognition in Financial Crimes", "Expert Systems for Financial Analysis", "Chain of Custody for Digital Financial Records", "Automated Compliance Checking", "Enhanced Discovery Process", "Demonstrative Evidence", "Cross-Border Evidence Gathering", "Predictive Analytics in Civil Litigation" and "Blockchain Integration". With regard to the latter new phenomenon Claude was of the following opinion:
Combining XBRL with blockchain technology could provide immutable records of financial transactions, potentially changing how financial evidence is verified and presented in court.
The system also warned of some legal questions that might arise in this regard. These relate to evaluating the reliability of AI-generated analysis of XBRL data, the standards to be applied in admitting AI-processed financial evidence, and how these technologies might impact upon cross-examination.
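The "chain of custody" point can be made concrete with a purely illustrative sketch. The file name below is hypothetical, and a cryptographic digest is of course not itself a blockchain, although such a digest is what a ledger entry would typically record:

```python
# Illustrative sketch only: a tamper-evident fingerprint of an XBRL filing,
# of the kind that could support arguments about authenticity and chain of custody.
import hashlib
from pathlib import Path

def filing_fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a filing; any alteration changes the digest."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Usage (assuming a local copy of a filing exists at this hypothetical path):
# digest_at_receipt = filing_fingerprint("annual_report_2023.xbrl")
# ...later, before the filing is tendered in evidence...
# assert filing_fingerprint("annual_report_2023.xbrl") == digest_at_receipt
```

If the digest recorded when the filing was received matches the digest computed when it is tendered, the party relying on it has at least a technical foundation for asserting that the document has not been altered in the interim; the legal weight of that assertion remains, as Claude's caveats suggest, a matter for the courts.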
7 AI and international instruments
The UN General Assembly has recently adopted a resolution on AI.
68
68 Jarovsky 2024 https://www.linkedin.com/pulse/un-adopts-first-global-resolution-ai-luiza-jarovsky-8mzrc.
The sustainable development goals to which the resolution links AI are as follows: "No poverty, Zero hunger, Good health and well-being, Quality Education, Gender equality, Clean water and sanitation, Affordable and clean energy, Decent work and economic growth, Industry, innovation and infrastructure, and Reduced inequalities".
69
69 Jarovsky 2024 https://www.linkedin.com/pulse/un-adopts-first-global-resolution-ai-luiza-jarovsky-8mzrc.
However, simply presenting such almost impossible goals to a country such as present-day South Africa will not bring much practical relief. Escom is still trying to protect the country from regular70 power cuts, and the goal of affordable and clean energy remains
70 Sometimes unexpected and therefore not so regular and also lasting much longer than the usual two or four hours.
at risk because of the present government's emphasis on coal-fired power stations. "Peace, justice and strong institutions" have all been marred by a number of senior incumbents (such as judges) having to be removed because of their corrupt activities or connections.
On the international front the present author finds it heartening that UNESCO finally adopted a Recommendation on the Ethics of Artificial Intelligence in November 2021.
71
71 UNESCO 2022 https://unesdoc.unesco.org/ark:/48223/pf0000381137.
To turn to some other international organisations and their views on AI, a helpful academic article titled "Towards Drafting Artificial Intelligence (AI) Legislation in South Africa" has recently been published in this regard.
72
72 Snail and Morige 2024 Obiter 161-179.
In it the authors observe that the rise of AI has brought with it a
plethora of ethical risks and dilemma's and multiple scandal such as the infringement of laws and rights, as well as racial and gender discrimination.
73
73 Snail and Morige 2024 Obiter 162.
The last-mentioned factor is a very real risk where machines are "trained" on data drawn from only one racial grouping. This will be further discussed below.
74
74 See section 8 "Conclusion" below.
As far as international bodies are concerned, the authors cited then embark on a very useful discussion of the recommendations by the OECD and the recommendation by its Ministerial Council on 22 May 2019. These are to:
encourage trust and innovation in AI by encouraging the responsible stewardship of trustworthy AI, while safeguarding human rights and democratic values as well as existing OECD standards such as privacy, risk management, and responsible business conduct.
75
75 As quoted in Snail and Morige 2024 Obiter 164.
In doing so Snail and Morige cite the following five "core value-based principles" that should be striven for, namely "Accountability", "Transparency and explainability", "Robustness, security and safety", "Human-centred values and fairness" and "Inclusive growth, sustainable development and well-being". In order for these noble goals to be fulfilled, the OECD also recommends that governments should concentrate on "building human capacity and preparing for labour market transformation", should "shape an enabling policy environment for AI", should "foster a digital ecosystem for AI," should also "enhance international cooperation for trustworthy AI" and should invest in "AI research and development".
76
76 Snail and Morige 2024 Obiter 164-165.
Without wishing to be too easily labelled a pessimist or a cynic, the present author doubts whether all these fine-sounding goals are practically achievable without much more detailed planning, greater self-sacrifice in the interest of the general well-being, and the required political insight and will. South Africa's present experience of corruption and many other crimes, often perpetrated by senior political figures, would seem to indicate that these ideals are not a first priority here. Even though terms such as "security", "safety" and "trustworthy" are easily used, these noble goals might prove to be elusive in our generally crime-ridden world. Even before the arrival of AI, computer crime was a "growth industry" that was difficult to contain, and addressing this phenomenon needs careful planning. Not to close the argument on this aspect on too negative a note: AI might also prove to be a useful "policeman" helping to curb this pernicious practice. There is also some hope for flexibility in upcoming developments, as the following paragraph hopes to illustrate.
In this regard Snail and Morige also mention the G7 Hiroshima Summit, which was held "to promote safe, secure, and trustworthy AI globally". According to the summit a set of guiding principles is to be developed with the goal of assisting "the uptake of the benefits of these new technologies as (to) address the risks and challenges they bring".
77
77 Snail and Morige 2024 Obiter 175. 78 Snail and Morige 2024 Obiter 175.
The Hiroshima Process suggests that different jurisdictions may take their own approach in implementing these guiding principles. While governments develop more detailed governance and regulatory approaches, it is important for organisations to follow these actions in consultation with other relevant stakeholders.
79
79 Snail and Morige 2024 Obiter 175.
8 Conclusion
Some useful information has hopefully been imparted and some warning signs highlighted in the foregoing. AI appears to the present author to bear a close analogy to dangerous but necessary developments such as firearms, the motorcar (and motorcycle!), railways, nuclear power stations and the like. Mankind cannot retrace its steps and "uninvent" or "forbid" these developments. However, it should let them loose upon an unsuspecting public only once a proper balance between the (often competing) claims of privacy and security has been legislated for.
The present author finds that academic (as well as lay) opinion is sharply divided as to the merits (or the dangers) of artificial intelligence. This dichotomy also extends to the legal world, a field that is supposed to provide protection and guidance to its many subjects. In order to help answer the question set in the introduction to this article as to the law's protective role in this regard, the present article has endeavoured to provide a philosophical as well as a comparative perspective. As has been stated above, one could tender both a short and a slightly longer but more motivated answer to the question posed. The short answer is simply "If we train it to do so". The slightly longer and more motivated answer is:
if data controls such as those provided by EDGAR and XBRL guarantee reliability to the data that needs to be used in the context of financial reports and regulatory compliance.
In other words, and to continue the metaphor used in the opening paragraph of this section: if one feeds diesel into a petrol-powered car or motorcycle, the outcome will be disastrous. On the other hand, if one obtains higher-octane petrol, with its refinement and quality well controlled, and feeds that into a high-performance petrol-powered machine, the results will probably be quite gratifying.
In addition to finding the above balance, the legal field is faced with a particular conundrum in comparative law. As mentioned above, many countries (for instance, those in continental Europe) have an inquisitorial system of court procedure where the judge forms part of the team trying to get at the whole truth of the matter and where almost all evidence is admissible. By contrast, other countries such as the USA, the UK and even South Africa (as a former British colony), have inherited an accusatorial system of court procedure. The latter system has developed a compendious system of (mostly) judge-made law governing the admissibility of evidence and, once it is admitted, the degree of evidential weight that should be attached to such evidence. For this reason an AI legal system developed for continental Europe or for Francophone Africa would not necessarily be practical for the USA, the UK and most former members of the world-wide British empire, including those in Africa. This presents a problem for a truly international phenomenon such as AI.
Another danger, of which South African courts have already had an unfortunate experience, is the AI programme ChatGPT's "invention" of bogus cases in order to strengthen its master's argument in court.
80
80 For this case see Van der Merwe 2023 Obiter 939-959 as well as the same case cited by Snail and Morige 2024 Obiter as Parker v Forsyth (1585/20) [2023] ZAGPRD 1 (29 June 2023) paras 85-86.
A further problem greatly troubling the present author is the fact that very few computer security experts have spent a period of time on legal training and that not many information technology (IT) lawyers (such as myself) have spent much time on the very specialised field of information technology itself. Thus we have three entirely separate fields of expertise, with each expert required to play an important role in dealing with the explosive arrival of AI. One future solution might be to have an experienced magistrate or judge preside in a court while being assisted by two assessors, the first having advanced IT security training and the second being skilled in the field of IT, especially in the sub-domain of AI and all its technical implications. It may also be asked to what extent AI itself is likely to take over the role of IT-experts and lawyers respectively. In an article entitled Jobs Most Likely to Be Affected by AI
81
81 Kahn 2024 https://www.superhuman.ai/p/ai-gadgets?utm-source=. 82 Such as I am!
Another factor that has not often been canvassed is the tremendous expense involved in building the required expert systems for any field of human endeavour, including the three mentioned in the previous paragraph. Even more worrying, especially for South Africa, is the ever-increasing demand for electricity that the increased use of such systems is likely to bring about. With South Africa's Escom barely able to keep the lights on for twenty-four hours at a time, the arrival of AI may see this country falling even further behind in the global race to implement the changes needed to enter the Fourth Industrial Revolution properly. South Africa also has a unique problem in the need to "re-educate" its AI systems to ignore any possible racial bias in the "training" that has been invested in them.
The author does not wish to end this article on too negative a note. Upon reading the final paragraphs above the reader might be reminded of the (supposedly) ancient Chinese curse: "May you live in interesting times"! Artificial Intelligence, with all its concomitant advantages, still remains a major step forward for mankind, now allied with intelligent machines. Any country that refuses these new developments will only land in an international backwater and will not be able to provide work for its untrained and unaware citizens. Fortunately the youth of South Africa have grown up with computer and cellular technology and should be able to conceptualise and create many labour opportunities as yet unthought of by their elders.
In the end, perhaps we should enlist the services of a fully trained "avatar" to aid our human authors, both as far as the creation of data and as far as safeguards for that data are concerned.
Bibliography
Literature
Barfield "Towards a Law of Artificial Intelligence"
Barfield W "Towards a Law of Artificial Intelligence" in Barfield W and Pagallo U Research Handbook on the Law of Artificial Intelligence (Edward Elgar Cheltenham 2018) 2-39
Barfield and Pagallo Research Handbook on the Law of Artificial Intelligence
Barfield W and Pagallo U Research Handbook on the Law of Artificial Intelligence (Edward Elgar Cheltenham 2018)
Descartes Discourse on Method
Descartes R Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences (1637) (Veitch J translation, Sutherland and Knox Edinburgh 1853)
Lennox 2084
Lennox JC 2084: Artificial Intelligence and the Future of Humanity (Zondervan Grand Rapids 2020)
Lewis Abolition of Man
Lewis CS The Abolition of Man (HarperOne San Francisco 1943 (2001 reprint))
Orwell 1984
Orwell G 1984 (Arcturus London 2001)
Pistorius 2006 PELJ
Pistorius T "Developing Countries and Copyright in the Information Age: The Functional Equivalent Implementation of the WCT" 2006 PELJ 1-27
Snail and Morige 2024 Obiter
Snail S and Morige M "Towards Drafting Artificial Intelligence (AI) Legislation in South Africa" 2024 Obiter 161-179
Stoker Oorsprong en Rigting
Stoker HG Oorsprong en Rigting (Tafelberg Cape Town 1967)
Van der Merwe 1998 SALJ
Van der Merwe DP "Copyright and Computers, with Special Reference to the Internet: 'From Penmanship to Peepshow"' 1998 SALJ 180-201
Van der Merwe 1999 International Review of Law, Computers and Technology
Van der Merwe DP "The Dematerialisation of Print and the Fate of Copyright" 1999 International Review of Law, Computers and Technology 303-315
Van der Merwe 2023 Obiter
Van der Merwe DP "Legal Aspects of the Fourth Industrial Revolution (4iR) (with Specific Reference to ChatGPT and Other Software Purporting to Give Legal Advice" 2023 Obiter 939-959
Van der Merwe et al Information and Communications Technology Law
Van der Merwe DP et al Information and Communications Technology Law 3rd ed (LexisNexis Durban 2022)
Van der Vyver and Van Zyl Introduction to Jurisprudence
Van der Vyver JD and Van Zyl FJ Introduction to Jurisprudence (Butterworths Durban 1972)
Zeffertt, Paizes and Skeen South African Law of Evidence
Zeffertt DT, Paizes AP and Skeen A St Q The South African Law of Evidence (LexisNexis Durban 2003)
Case law
Parker v Forsyth (1585/20) [2023] ZAGPRD 1 (29 June 2023)
Legislation
Nigeria
Nigerian Criminal Code, Chapter 77 of the Laws of the Federation of Nigeria, 1990
United States of America
Defense Production Act, 1950
International instruments
Council of Europe Convention on Cybercrime (ETS No 185) (2001)
Internet sources
BCS 2022 https://www.bcs.org/articles-opinion-and-research/light-touch-approach-to-ai-regulation-welcomed-by-it-industry-body/
BCS 2022 Light Touch Approach to AI Regulation Welcomed by IT Industry Body https://www.bcs.org/articles-opinion-and-research/light-touch-approach-to-ai-regulation-welcomed-by-it-industry-body/ accessed 25 November 2024
Brand 2023 https://www.swart.law/post.aspx?id=107
Brand D 2023 The European Union's Artificial Intelligence (AI) Act in a Nutshell https://www.swart.law/post.aspx?id=107 accessed 1 July 2024
Brand 2024 https://law.mpg.de/event/artificial-intelligence-and-the-law-an-african-perspective/
Brand D 2024 Artificial Intelligence and the Law: An African Perspective https://law.mpg.de/event/artificial-intelligence-and-the-law-an-african-perspective/ accessed 1 July 2024
Connock and Stephen 2022 https://singularityhub.com/2022/06/19/ai-shakespeare-and-ai-oscar-wilde-debate-machine-creativity-at-oxford/
Connock C and Stephen A 2022 AI Shakespeare and AI Oscar Wilde Debate Machine Creativity at Oxford https://singularityhub.com/2022/06/19/ai-shakespeare-and-ai-oscar-wilde-debate-machine-creativity-at-oxford/ accessed 1 July 2024
EU 2024 https://artificialintelligenceact.eu/the-act/
European Union 2024 Artificial Intelligence Act https://artificialintelligenceact.eu/the-act/ accessed 25 November 2024
Grayling 2023 https://grayling.com/news-and-views/balancing-innovation-and-responsibility-examining-the-uks-light-touch-approach-to-ai-regulation/
Grayling 2023 Balancing Innovation and Responsibility: Examining the UK's Light-Touch Approach to AI Regulation https://grayling.com/news-and-views/balancing-innovation-and-responsibility-examining-the-uks-light-touch-approach-to-ai-regulation/ accessed 25 November 2024
Jarovsky 2024 https://www.linkedin.com/pulse/un-adopts-first-global-resolution-ai-luiza-jarovsky-8mzrc
Jarovsky L 2024 UN Adopts First Global Resolution on AI. Luiza's Newsletter #96 https://www.linkedin.com/pulse/un-adopts-first-global-resolution-ai-luiza-jarovsky-8mzrc accessed 1 July 2024
Kahn 2024 https://www.superhuman.ai/p/ai-gadgets?utm-source=
Kahn Z 2024 Jobs Most Likely to Be Affected by AI https://www.superhuman.ai/p/ai-gadgets?utm-source= accessed 1 July 2024
MacCallum 2023 https://www.bbc.com/news/technology-65102210.amp
MacCallum S 2023 UK Rules Out New AI Regulator https://www.bbc.com/news/technology-65102210.amp accessed 1 July 2024
Manancourt 2024 https://www.politico.eu/article/rishi-sunak-ai-us-eu-forge-britian-london-chatgpt/
Manancourt V 2024 Rishi Sunak Dithers on AI as the US and EU Forge Ahead https://www.politico.eu/article/rishi-sunak-ai-us-eu-forge-britian-london-chatgpt/ accessed 1 July 2024
Mashishi 2023 https://www.dcdt.gov.za/images/phocadownload/AI_Government_Summit/National_AI_Government_Summit_Discussion_Document.pdf
Mashishi A 2023 South Africa's Artificial Intelligence (AI) Planning https://www.dcdt.gov.za/images/phocadownload/AI_Government_Summit/National_AI_Government_Summit_Discussion_Document.pdf accessed 1 July 2024
SirBacon.org 2022 https://sirbacon.org/bacon-forum/index.php?/topic/193-ai-artificial-intelligence-shakespeare-and-francis-bacon/
SirBacon.org 2022 AI (Artificial Intelligence), Shakespeare, and Francis Bacon https://sirbacon.org/bacon-forum/index.php?/topic/193-ai-artificial-intelligence-shakespeare-and-francis-bacon/ accessed 25 November 2024
Thorne 2024 https://businesstech.co.za/news/government/768147/south-africas-proposed-ai-plan-needs-a-rework-experts/
Thorne S 2024 South Africa’s Proposed AI Plan Needs a Rework: Experts https://businesstech.co.za/news/government/768147/south-africas-proposed-ai-plan-needs-a-rework-experts/ accessed 25 November 2024
UNESCO 2022 https://unesdoc.unesco.org/ark:/48223/pf0000381137
United Nations Educational, Scientific and Cultural Organisation 2022 Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/PI/2021/1 https://unesdoc.unesco.org/ark:/48223/pf0000381137 accessed 25 November 2024
Vitanov 2021 https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html
Vitanov P 2021 Report on Artificial Intelligence in Criminal Law and Its Use by the Police and Judicial Authorities in Criminal Matters. A9-0232/2021 https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html accessed 25 November 2024
Waem, Dauzier and Demircan 2024 https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/
Waem H, Dauzier J and Demircan M 2024 Fundamental Rights Impact Assessments under the EU AI Act: Who, What and How? https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/ accessed 1 July 2024
White House 2023 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
White House 2023 Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ accessed 25 November 2024
List of Abbreviations
AGI | Artificial General Intelligence
AI | Artificial Intelligence
AIISA | Artificial Intelligence Institute of South Africa
AU | African Union
DCDT | Department of Communications and Digital Technologies
DPIA | Data Protection Impact Assessment
EIA | Ethical Impact Assessment
Escom | Electricity Supply Commission
EU | European Union
FRIA | Fundamental Rights Impact Assessment
IT | Information Technology
OECD | Organisation for Economic Co-operation and Development
PELJ | Potchefstroom Electronic Law Journal
SALJ | South African Law Journal
UK | United Kingdom
UNESCO | United Nations Educational, Scientific and Cultural Organisation
US/USA | United States of America