
AI v. Ecological Wisdom

One White Bit (see a more polemical version in Sublime Magazine & other articles)
DRAFT VERSION of a published article
for the Journal of Writing in Creative Practice,
Volume 17, Issue 2, Apr 2024, p.127-136

By John Wood
Emeritus Professor of Design
Goldsmiths University of London
November 2024

Figure 1: Arms race between learning simulators v. plagiarism detectors


INTRODUCTION

In seeking to extend humanity’s long-term future, several UNESCO reports have called for the re-purposing of education in order to catalyse cultural change across the world. Ideally, this would entail re-imagining education’s deep ecological purpose. But this means acknowledging that universities emerged from two separate traditions that see the purpose of learning very differently. In the atelier culture of artisanship, tacit and embodied practices of learning are the norm. By contrast, the dominant tradition is more scholastic and bookish, and therefore focuses on language-based knowledge that can be written down. One way to achieve the change that UNESCO seeks is to encourage more creative and opportunity-finding approaches. This would mean encouraging thinking processes that are more imaginative and future-oriented, and less embedded in extant knowledge. Of course, this would make it harder to retain the traditional emphasis on fairness and transparency over curiosity and learning. The task became even more challenging when mainstream institutions recently decided to normalise the use of AI systems as an adjunct to traditional learning. In seeking ways to reconcile these issues, this article calls for a radical revisioning of the term ‘wisdom’. A suitably revised definition (here called ‘Wisdom’) would need to be much broader: holistic, pluralistic, ecosystemic and, therefore, less anthropocentric.

Keywords

artificial intelligence (AI), academic rigour, Wisdom, education, purpose, tacit knowledge, creativity

Re-purposing the Education Paradigm

The idea that creative studio practices lack the credibility enjoyed by more scholastic disciplines remains the education system’s ‘elephant in the room’. Our universities grew from several traditions of practice, the dominant of which was the mediaeval monastic culture that surrounded writing. This culture pre-dated print technologies and was characterised by solitary study and a slavish, fastidious practice of copying sacred texts from one book to another. This might explain scholasticism’s long-standing identification with the concept of ‘academic rigour’. I graduated from art school with a humble diploma but was assured that it had ‘degree status’. This was only a few years after the UK government’s 1960 Coldstream Report had approved the awarding of honours degrees to art, design and craft graduates. Our lecturers told us that our rise in status had been approved on the condition that academic writing be included in the syllabus. As a result, many art schools decided to entice lecturers from the humanities to help artists and designers write in the style of ‘proper researchers’. Ironically, it was many decades before a closer reading of the original documents revealed that Coldstream’s mission had largely been ‘to lend academic credibility’ to studio practices. More crucially, its report made no mention of writing per se (Lockheart 2022).

In 2002, a few of us (art school lecturers at Goldsmiths, the Royal College of Art and Central St Martins) established the ‘Writing-PAD’ network. We wanted to challenge received assumptions behind the way that students of art, design and craft were required to write essays. The ‘P’ in our acronym ‘PAD’ referred to the assumed ‘Purpose’ behind writing. We knew this was provocative. For a fine artist, it is customary to start work without any sense of what you are doing or why you are doing it. But what is the purpose of the academic writing process? How, if at all, is writing meant to relate to one’s studio practice? Would a purposeless essay have credibility? Would it be acceptable if the intention had been, simply, to surprise oneself with an unexpected discovery? Of course, these kinds of issues are usually negotiated within the ethical and managerial context of a marking regime. For example, would it be permissible for students to ‘fake’ the purpose of their essays, post hoc? More recently, with the arbitrary imposition of artificial intelligence (AI) systems, these long-standing questions have returned to torment us.

In the past, many art, design and craft students felt inadequate when judged within an academic culture that saw little value in neurodiversity. Today, we are less likely to associate dyslexia-like symptoms with a lack of intelligence. Indeed, recent research (Cross et al. 2024) has shown that people with dyslexia or dyscalculia are less biased than others. Yet, while ChatGPT is increasingly being used to camouflage these academic ‘shortcomings’, other studies (Baker and Hawn 2022) remind us that AI is no less biased than human beings. But there is an even deeper and more important factor here. Whereas AI merely simulates the learning process using language models, human knowing is ineffably distributed across the mind and body. Whereas data or information is represented by inflexible codes such as algorithms, human knowledge is always distributed in real time across living systems. This raises profound ethical and political questions about the precise location and status of human knowledge (cf. Polanyi 1962; McGilchrist 2009; Robinson 2016). As Polanyi argued: ‘All knowledge is tacit knowledge if it rests on our subsidiary awareness of particulars in terms of a comprehensive unity’ (1962: n.pag.). I imagine this bold claim might irritate or baffle some traditional scholars, but it explains why academics in art, design and craft resort to examination by human judgement, rather than by more mechanical means.

Politicians and senior academics love using the terms ‘scholarly rigour’ or ‘academic rigour’, probably because they sound decisive and imperious. However, I have long argued that the word ‘rigour’ is too ambivalent to be fit for purpose (Wood 2000, 2012). While I can agree that thoroughness and attentiveness to detail may be useful within learning and research, the root word ‘rigour’ implies dry, dead or stony things rather than living ones. This is why I wince whenever I see ‘research’ confused with travel to distant conferences or success in writing and publication. The over-identification of learning with narrow scholasticism, bureaucratic procedures or algorithmic fixity is unlikely to meet the UN’s calls for education reform. Accountancy is at least 5,000 years old, so humans are accustomed to surrendering qualities in exchange for quantities. By ignoring the practical fact that ‘1’ is never quite equal to ‘1’, arithmetic legitimizes questions designed to have only one correct answer. As an invention it was cunning and useful, but over-enthusiasm for Platonic or Cartesian truths can erode our human value systems.

Classical science espoused a strong faith in a universe founded on dependable ‘laws’. For some subjects, it may be reasonable to manage approximations as though they were absolutes. After all, this approach has given us aeroplanes that seldom crash and bridges that rarely fail. On the other hand, the belief system within art and design is less deterministic. Artists tend to see the world as defined more by exceptions than by rules (Ljubec 2022). This raises some challenging questions about education’s proof of purpose and its managerial etiquette. The more importance we place on creating new meaning or producing innovative artefacts of learning (say, within fine art, design or metadesign), the less reason there is for classifying it according to generic or extant standards. I was a serial failure in my school maths exams, so I felt especially proud when my 9-year-old grandson regularly scored 100% in his maths tests. But when his sister received 96% for her beautiful paintings, I became more concerned. How would such arbitrary and misleading feedback affect a young artist’s development? In 1968, George Land and Beth Jarman took some creativity tests originally devised for NASA and discovered that 98% of the 5-year-olds who sat them registered as creative geniuses. However, they subsequently found that only around 30% of 10-year-olds did so, falling to 12% of 15-year-olds and just 2% of 25-year-olds (Turnipseed and Darling-Hammond 2015). They concluded that our capacity to think in uncreative ways is a habit we are taught, rather than one we acquire naturally (cf. Robinson and Aronica 2016).

In 1945, the American inventor Vannevar Bush envisaged a device for writing and reading that would acknowledge the reader’s cognitive idiosyncrasies. His prototype – the ‘Memex’ – was a ‘mechanised private file and library’ that would broker cross-disciplinary insights that might otherwise be missed by traditional field experts or narrow specialists. Whereas academic disciplines have tended to establish subject-based genres of thinking, Bush foresaw the need to acknowledge what his successor Ted Nelson called ‘intertwingularity’. As Nelson put it: ‘there are no subjects at all; there is only knowledge, since the cross-connections among the myriad topics of this world simply cannot be divided up neatly’ (2015: 133–50). The ‘Memex’ enabled users to follow their own associative leaps, set up personal mnemonic markers and create their own ‘information trails’. Bush’s idea led subsequent inventors to what is now called ‘hypertext’. By using a simple set of mark-up codes, Tim Berners-Lee wanted documents to be interoperable by all – i.e. safely out of reach of the giant corporations. This worked for a while, but the system has now reverted to serving the proprietary needs of a few enormous tech companies. Berners-Lee had designed a document protocol with attached metadata that described the specific style, context or genre chosen by the author. He called this a document type definition (DTD). Unfortunately, once his generic version (i.e. without a DTD) began to circulate around the world, it unexpectedly became accepted as a new standard (i.e. HTML). Most of us were ready to accept the convenience of automatic search and other gadgets, even if it meant accepting what is, almost literally, the beguiling presentation of mediocrity. Today we are being pushed around by reading and writing machines that are smarter, but not wiser, than hypertext.
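The idea of a DTD can be sketched in a few lines of mark-up. The element and attribute names below are hypothetical, invented purely to illustrate what such a declaration does: the author states the intended structure and genre of the document, rather than leaving both implicit.

```xml
<!-- A hypothetical DTD: the author declares the document's structure
     and genre up front, instead of leaving both to the reader's guess. -->
<!DOCTYPE essay [
  <!ELEMENT essay (title, body)>
  <!ATTLIST essay genre CDATA #REQUIRED>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT body  (#PCDATA)>
]>
<essay genre="polemic">
  <title>AI v. Ecological Wisdom</title>
  <body>…</body>
</essay>
```

A validating parser can then reject any document that does not match the declared type; a genre-less HTML page, by contrast, carries no such self-description.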

Humans are stupid. We love to be fooled by cute gadgets and pets. We are so charmed by AI that we have not even bothered to challenge its raison d’être. The Turing test showed us how readily we hallucinate another person’s presence from the weakest or most dubious of signals. Instead of taking that as a warning, we decided to use it as a benchmark for robotics engineering. Nor is this the first time we have allowed inanimate products to push us around. Clocks are spectacularly ignorant, yet we let them tell us when we are hungry or tired. We know that money has no intrinsic value, yet we have allowed it to make some of us want more than we could ever spend. In the late 1980s, I created a hypertext system that worked as a personalized authoring tool. I figured that, although busy artists and designers needed to arm themselves with relevant facts and information, they could not afford to spend too much time in the library. My ‘IDEAbase’ system used SGML (a superset of HTML) and anticipated methods subsequently found in wiki technology. One of its features was that it allowed authors to assign ‘associative keywords’ to the more general terms that were visible to the reader. Hidden from the reader, these acted as personal mnemonics, enabling web-based documents to identify one another automatically, even if their visible texts seemed unrelated. In short, it was intended to encourage individual creativity and to make it easier for like-minded mavericks to find one another in a human way. Most importantly, it was author-centred rather than reader-centred.
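The associative-keyword mechanism can be sketched in a few lines. This is not IDEAbase itself, only a minimal illustration of the principle, with invented document contents and a hypothetical `hidden_overlap` helper: each document pairs its visible text with private mnemonic tags, and two documents ‘recognise’ each other when their hidden tags overlap, even if their visible words share nothing.

```python
# Sketch of hidden 'associative keywords' (illustrative names only).
# Each document carries visible text plus private mnemonic tags; matching
# happens on the hidden tags, not on the visible wording.

def hidden_overlap(doc_a, doc_b):
    """Return the hidden keywords two documents share."""
    return set(doc_a["hidden"]) & set(doc_b["hidden"])

essay = {"visible": "Notes on mediaeval scriptoria",
         "hidden": {"copying", "rigour", "embodiment"}}
sketchbook = {"visible": "Studio drawings, March",
              "hidden": {"embodiment", "gesture"}}

# The visible titles share no words, yet the documents find each other:
print(hidden_overlap(essay, sketchbook))  # {'embodiment'}
```

The point of the design is that the author, not the reader or a search engine, decides which associations count.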

Until we decide to train a new species of automata that will outperform us in everything from warfare to having fun, maintaining immediate and direct human control over the content, style and genre of documents is vital. With all this in mind, I am dismayed that universities endorse AI in their studies rather than bothering to rethink the existing paradigm. This is remarkable, given that undetectable plagiarism is now a fact of life. In 2024, vice-chancellors at the 24 Russell Group research-intensive universities agreed a code of practice that asked students and staff simply to become more ‘AI literate’. They invited UK universities to exploit the opportunities of artificial intelligence whilst ‘maintaining academic rigour’ and upholding the importance of ‘integrity in higher education’. A little later the same year, undercover researchers at a UK university submitted exam answers generated by ChatGPT-4. Their deception was kept secret until the papers had been marked. Only 6% of the 33 papers submitted were flagged as questionable; the rest achieved grades higher than those awarded to human students (Scarfe et al. 2024). Presumably these vice-chancellors disagreed with Stephen Hawking, who warned in 2014 that AI ‘would take off on its own, and re-design itself at an ever increasing rate’ (cited in Cellan-Jones 2014: n.pag.). He said, ‘I fear that AI may replace humans altogether as a new form of life that will outperform humans’ (Hawking cited in Cellan-Jones 2014: n.pag.). Unless we soon reform old habits, assumptions and procedures, academics could become impotent bystanders in the arms race between the simulators of learning and the bots designed to hunt them down.

Responding to the world’s worsening climate and biodiversity challenges, several of UNESCO’s Futures of Education reports have suggested that we re-think our education systems within a global strategy for change (Carney 2022). Presumably, this would mean ditching outmoded assumptions and replacing a few cherished practices, starting by questioning the deep purpose of learning and, therefore, the concept of (human) ‘knowledge’. Although we might need a broader, more ecocentric idea of ‘knowledge’, the last few thousand years of humanism have left us with a presumptuous, individual-focused definition of ‘wisdom’, e.g. ‘the ability to use your knowledge and experience to make good decisions and judgments’ (Cambridge Online Dictionary n.d.: n.pag.).

How might we reframe ‘wisdom’ in a less anthropocentric and more ecocentric way? We might, for example, want to acknowledge the importance of ignorance alongside knowledge. We might also wish to find better ways to differentiate the many varieties of procedural, declarative, factual, tacit and other forms of human knowing. This larger map of ‘Wisdom’ (n.b. with a capital ‘W’) should also encompass collective knowledges, as individual knowledge is always incomplete and idiosyncratic. In the nineteenth century, Francis Galton (1822–1911) discovered that decisions and choices made by crowds can be more efficacious than those made by individual experts (Surowiecki 2005). Ideally, we may also need to coordinate the full spectrum of (Earthly) knowledges – i.e. all of the reasoning and communication exchanges that overlap and interconnect to create the resilience of whole ecosystems. In short, I would tentatively re-define ‘Wisdom’ as a requisite variety of working and thinking knowledges that help to co-sustain our Earthly web of living systems.
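Galton’s finding is easy to reproduce in miniature. The numbers below are invented (a hypothetical ‘true weight’ and simulated noisy guesses), but the aggregation effect they illustrate is the one Surowiecki (2005) describes: the crowd’s average lands closer to the truth than the great majority of individual guesses.

```python
import random
random.seed(1)

# Toy model of Galton's crowd: many independent, noisy guesses at a
# hypothetical true value, aggregated by a simple mean.
TRUE_WEIGHT = 543
guesses = [TRUE_WEIGHT + random.gauss(0, 60) for _ in range(800)]
crowd_estimate = sum(guesses) / len(guesses)

crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
individual_errors = [abs(g - TRUE_WEIGHT) for g in guesses]
beaten = sum(e > crowd_error for e in individual_errors)

print(f"crowd error: {crowd_error:.1f}")
print(f"crowd beats {beaten} of {len(guesses)} individual guessers")
```

The effect depends on the guesses being independent; a crowd that copies itself (or a single model) loses the benefit.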

Over the last half century, capitalism has increasingly glamorized the notion of ‘data’, partly because it is elusive, but also because engineers made it conform to the numerical certainties of accountancy. In human terms, it is merely an intangible and imagined subset of ‘information’. Conversely, information is only data that makes sense to humans. Advocates of AI may disagree, especially as most of us seem to be becoming habitually dependent upon it. If we define ‘intelligence’ in the context of a standard IQ test, it is tempting to agree with the claim that AI gadgets perform intelligently. This is fortuitous for AI evangelists. But whereas AI systems are able to ‘answer’ questions by manipulating alphanumerical characters, humans draw upon more situated, embodied and emergent aspects of knowing. As Wittgenstein noted: ‘Everyday language is a part of the human organism and no less complicated than it. It is not humanly possible to gather immediately from language what the logic of language is’ (1963: 35).

Frankly, as these concepts of ‘knowledge’ are too complex to grasp fully, we may reach for binary terms such as ‘knowing that’ and ‘knowing how’. This raises awkward questions about exactly when, where and in what tangible context these acts of ‘knowing’ take place. For example, what is the relative importance of a viva voce compared with a written thesis? Where, and what, exactly, is being evaluated in a Ph.D.? Should an external examiner evaluate the candidate’s ability to craft a readable, precise and original book? Or is it more valid to estimate their acquired ability to offer useful, convincing and unrehearsed answers to questions from an expert in the field? This dilemma becomes even more acute when we try to evaluate subjects such as art and design. University exam boards paper over these cracks by applying the same summative procedures to all degrees, whether in fine art, economics or physics.

As our definition of Wisdom includes all of the non-human wisdoms that sustain the planet, Jakob von Uexküll’s (2010) term ‘Umwelt’ is useful. Although it describes the different phenomenological horizons of experience that limit communication between species, it might also encourage us to find new ways to bridge them. Perhaps this would help us to develop new modes of rapport with other species. This suggests that Wisdom would be ecosystemic, infinitely extensive and a superset of eco-semiotic metaphenomena.
Figure 2: The Johari window (Saxena 2015)

Donald Rumsfeld’s famous distinction between ‘known unknowns’ and ‘unknown unknowns’, borrowed from Luft and Ingham’s (1955) ‘Johari window’ (see Figure 2), is important here. If possible, we might want to extend the map to include things we know, yet remain unaware that we know (cf. Ehrig and Foss 2022). The same applies to the many levels of self-reflexive modes of meta-knowledge, such as scepticism or irony. We would also need to include the emotional types of intelligence (Goleman 2001) that inform the importance of empathy and other second-order modes of understanding.
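The Johari window itself is just a two-by-two grid crossing awareness by the self with awareness by others. A minimal sketch (quadrant labels follow the standard naming, not the figure’s exact wording):

```python
# The Johari window as a 2x2 grid: one axis is awareness by the self,
# the other is awareness by others.
johari = {
    ("known to self", "known to others"):     "open arena",
    ("unknown to self", "known to others"):   "blind spot",
    ("known to self", "unknown to others"):   "hidden facade",
    ("unknown to self", "unknown to others"): "unknown",
}

print(johari[("unknown to self", "unknown to others")])  # unknown
```

Rumsfeld’s ‘unknown unknowns’ occupy the fourth quadrant; the extension proposed above would add things we know without being aware that we know them.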
Figure 3 offers a thumbnail sketch of possible performance indicators that might be used to map out a Wisdom-based learning paradigm.
Figure 3: Possible criteria for evaluating ‘Wise’ learning systems


References

  1. Baker, R. S. and Hawn, A. (2022), ‘Algorithmic bias in education’, International Journal of Artificial Intelligence in Education, 32:4, pp. 1052–92, https://doi.org/10.1007/s40593-021-00285-9.
  2. Carney, S. (2022), Reimagining Our Futures Together: A New Social Contract for Education, Paris: UNESCO.
  3. Cellan-Jones, R. (2014), ‘Stephen Hawking warns artificial intelligence could end mankind’, BBC, 2 December, https://www.bbc.co.uk/news/technology-30290540. Accessed 20 October 2025.
  4. Cross, L., Atherton, G. and Nicolson, R. I. (2024), ‘People with dyslexia or dyscalculia are less biased: Results of a preregistered study from over 450,000 people on the implicit association test’, Neurodiversity, 2, https://doi.org/10.1177/27546330241288164.
  5. Ehrig, T. and Foss, N. J. (2022), ‘Unknown unknowns and the treatment of firm-level adaptation in strategic management research’, Strategic Management Review, 3:1, pp. 1–24, https://doi.org/10.1561/111.00000035.
  6. Goleman, D. (2001), ‘Emotional intelligence: Issues in paradigm building’, in C. Cherniss and D. Goleman (eds), The Emotionally Intelligent Workplace, San Francisco, CA: Jossey-Bass, pp. 13–26.
  7. Ljubec, Ž. (2022), ‘Becoming polyphibious’, in J. Wood (ed.), Metadesigning Designing in the Anthropocene, Abingdon: Routledge, pp. 219–30.
  8. Lockheart, J. (2022), ‘Languaging design’, in J. Wood (ed.), Metadesigning Designing in the Anthropocene, Abingdon: Routledge.
  9. McGilchrist, I. (2009), The Master and His Emissary: The Divided Brain and the Making of the Western World, New Haven, CT: Yale University Press.
  10. Nelson, T. H. (2015), ‘What box?’, in D. R. Dechow and D. C. Struppa (eds), Intertwingled: The Work and Influence of Ted Nelson, Cham: Springer, pp. 133–50.
  11. Polanyi, M. (1962), ‘Tacit knowing: Its bearing on some problems of philosophy’, Reviews of Modern Physics, 34:4, p. 601, https://doi.org/10.1103/RevModPhys.34.601.
  12. Robinson, K. and Aronica, L. (2016), Creative Schools: The Grassroots Revolution That’s Transforming Education, London: Penguin Books.
  13. Scarfe, P., Watcham, K., Clarke, A. and Roesch, E. (2024), ‘A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study’, PLoS One, 19:6, https://doi.org/10.1371/journal.pone.0305354.
  14. Surowiecki, J. (2005), The Wisdom of Crowds, London: Vintage.
  15. Turnipseed, S. and Darling-Hammond, L. (2015), ‘Accountability is more than a test score’, Education Policy Analysis Archives, 23:11, https://doi.org/10.14507/epaa.v23.1986.
  16. Von Uexküll, J. (2010), ‘The theory of meaning’, in D. Favareau (ed.), Essential Readings in Biosemiotics: Anthology and Commentary, Cham: Springer, pp. 81–114.
  17. ‘wisdom’ (n.d.), Cambridge Online Dictionary, https://dictionary.cambridge.org/dictionary/english/wisdom. Accessed 12 September 2025.
  18. Wittgenstein, L. (1963), Tractatus Logico-Philosophicus, London: Routledge & Kegan Paul.
  19. Wood, J. (2000), ‘The culture of academic rigour: Does design research really need it?’, The Design Journal, 3:1, pp. 44–57, https://doi.org/10.2752/146069200789393599.
  20. Wood, J. (2012), ‘In the cultivation of research excellence, is rigour a no-brainer?’, Journal of Writing in Creative Practice, 5:1, pp. 11–26, https://doi.org/10.1386/jwcp.5.1.11_1.