NetIKX Programme for 2018

The first meeting of 2018 is on Thursday 25 January:

Making true connections in a complex world: new technologies to link facts, concepts and data – Thursday 25 January 2018

A pdf giving details of the meeting will be available shortly.

The full 2018 programme will be announced shortly.


Taxonomy Boot Camp London

A 25% discount on the fee at the Taxonomy Boot Camp London (17–18 October) has been negotiated for NetIKX members. See http://www.netikx.org/forum/netikx-member-discount-taxonomy-boot-camp-2017 for the discount code.


The Implications of Blockchain for KM and IM

Conrad Taylor writes:

The speakers at the meeting on 6 July 2017 were Marc Stephenson, Noeleen Schenk and John Sheridan.

Marc Stephenson is the Technical Director at Metataxis. He has worked on the design, implementation and ongoing management of information systems for over 25 years, for organisations in health, central and local government, banking, utilities, new media and publishing. He has architected and implemented many IT solutions, including intranets, document management systems, records management systems and ECM portals. Marc recognises the need to design solutions that deliver maximum benefit at minimal cost, by focusing on the business, the users and, crucially, the information requirements, rather than on unnecessary technology and functionality.

Noeleen Schenk has over twenty years’ experience of working in the information sector as a practitioner, researcher and consultant. Her recent projects have focused on all aspects of information and knowledge management – from governance to assurance – helping clients manage their information successfully and minimise the risk to their information assets. These projects include information security, information and data handling, information risk management, and document and records management. Beyond her client work, Noeleen is passionately interested in the constantly changing information and knowledge management landscape, the use of technology, and new ways of working – helping businesses identify critical changes, assess the opportunities, then develop options and map out strategies to turn them into reality.

John Sheridan is the Digital Director at The National Archives, where he leads the development of the organisation’s digital archiving capability and the transformation of its digital services. John’s academic background is in mathematics and information technology, with a degree in Mathematics and Computer Science from the University of Southampton and a Master’s Degree in Information Technology from the University of Liverpool. John recently led, as Principal Investigator, an Arts and Humanities Research Council funded project, ‘big data for law’, exploring the application of data analytics to the statute book. More recently he helped shape the Archangel research project, led by the University of Surrey, looking at the applications of distributed ledger technology for archives. A former co-chair of the W3C e-Government Interest Group, John has a strong interest in web and data standards. He serves on the UK Government’s Open Standards Board, which sets data standards for use across government. John was an early pioneer of open data and remains active in that community.

Blockchain is a technology that was first developed as the technical basis for the cryptocurrency Bitcoin, but there has been recent speculation that it might be useful for various information management purposes too. There is quite a ‘buzz’ around the topic, yet it is too complex for many people to figure out, so it’s not surprising that the seminar attracted the biggest turnout of the year so far.

The seminar took the form of three presentations, two from the consultancy Metataxis and one from The National Archives. The table group discussions that followed were open and unstructured, with a brief period at the end for sharing ideas.

The subject was indeed complex and a lot to take in. In creating this piece I have gone beyond what we were told on the day, done some extra research, and added my own observations. I hope this will make some things clearer, and qualify some of what our speakers said, especially where it comes to technical details.

Marc Stephenson gives a technical overview

The first speaker was Marc Stephenson, Technical Director at Metataxis, the information architecture and information management consultancy. In the limited time available, Marc attempted a technical briefing.

Marc’s first point was that it’s not easy to define blockchain. It is not just a technology, but also a concept and a framework for ways of working with records and information; and it has a number of implementations, which differ in significant ways from each other. Marc suggested that, paradoxically, blockchain can be described as ‘powerful and simple’, but also ‘subtle, and difficult to understand’. Even with two technical degrees under his belt, Marc confessed it had taken him a while to get his head around it. I sympathise!

The largest and best-known implementation of blockchain so far is the infrastructure for the digital cryptocurrency ‘Bitcoin’ – so much so that many people get the two confused (and others, in my experience, think that some of the features of Bitcoin are essential to blockchain – I shall be suggesting otherwise).

Wikipedia (at http://en.wikipedia.org/wiki/Blockchain) offers this definition:

A blockchain […] is a distributed database that maintains a continuously growing list of ordered records called blocks. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously… [A blockchain is] an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.

Marc then dug further into this definition, but in a way which left some confused about what is specific to Bitcoin and what are the more generic aspects of blockchain. Here, I have tried to tease these apart.

Distributed database — Marc said that a blockchain is intended to be a massively distributed database, so there may be many complete copies of the blockchain data file on server computers in many organisations, in many countries. The intention is to avoid the situation in which users of the system have to trust a single authority.

I am sceptical as to whether blockchains necessarily require this characteristic of distribution over a peer-to-peer network, but I can see that it is valuable where serious issues of trust are at stake. As we heard later from The National Archives, it is also possible to create similar distributed ledger systems shared between a smaller number of parties which already trust each other.

Continuously growing chain of unalterable ‘blocks’ — The blockchain database file is a sequential chain divided into ‘blocks’ of data. Indeed, when blockchain was first described in 2008 by ‘Satoshi Nakamoto’, the pseudonymous creator of the system, the phrase ‘block chain’ was presented as two separate words. When the database is updated by a new transaction, no part of the existing data structure is overwritten. Instead, a new data block describing the change or changes (in the case of Bitcoin, a bundle of transactions) is appended to the end of the chain, with a link that points back to the previous block; which points back to the block before that; and so on back to the ‘genesis block’.

One consequence of this data structure is that a very active blockchain that’s being modified all the time grows and grows, potentially to monstrous proportions. The blockchain database file that maintains Bitcoin has now grown to 122 gigabytes! Remember, this file doesn’t live on one centralised server, but is duplicated many times across a peer-to-peer network. A negative consequence of blockchain can therefore be the enormous expense in computing hardware and energy that maintaining all those copies involves.

(As I shall later explain, there are some peculiar features of Bitcoin which drive its bloat and its massive use of computational resources; for blockchains in general, it ain’t necessarily so.)

Timestamping — when a new block is created at the end of a chain, it receives a timestamp. The Bitcoin ‘timestamp server’ is not a single machine, but a distributed function.

Encryption — According to Marc, all the data in a blockchain is encrypted. More accurately, in a cryptocurrency system, crucial parts of the transaction data do get encrypted, so although the contents of the blocks are a matter of public record, it is impossible to work out who was transferring value to whom. (It is also possible to implement a blockchain without any encryption of the main data content.)

Managed autonomously — For Bitcoin, and other cryptocurrencies, the management of the database is done by distributed software, so there is no single entity, person, organisation or country in control.

Verifiable blocks — It’s important to the blockchain concept that all the blocks in the chain can be verified by anyone. For Bitcoin, this record is accessible at the site blockchain.info.

Automatically actionable — In some blockchain systems, blocks may contain more than data; at a minimum they can trigger transfers of value between participants, and there are some blockchain implementations – Ethereum being a notable example – which can be programmed to ‘do’ stuff when a certain condition has been met. Because this happens automatically, without mediation, all of the actors can trust the system.

Digging into detail

In this section, I am adding more detail from my own reading around the subject. I find it easiest to start with Bitcoin as the key example of a blockchain, then explore how other implementations vary from it.

‘Satoshi Nakamoto’ created blockchain in the first place to implement Bitcoin as a digital means to hold and exchange value – a currency. And exchange-value is a very simple thing to record, really, whereas using a blockchain to record more complex things such as legal contracts or medical records adds extra problems – I’ll look at that later. Let’s start by explaining Bitcoin.

Alice wants to pay Bob. Alice ‘owns’ five bitcoins – or to put it more accurately, the Bitcoin transaction record verifies that she has an entitlement to that amount of bitcoin value: the ‘coins’ do not have any physical existence. She might have purchased them online with her credit card, from a Bitcoin broker company such as eToro. Now, she wants to transfer some bitcoin value to Bob, who in this story is providing her with something for which he wants payment, and has emailed her an invoice to the value of 1.23 BTC. The invoice contains a ‘Bitcoin address’ – a single-use identifier token, usually a string of 34 alphanumeric characters, representing the destination of the payment.

To initiate this payment, she needs some software called a ‘Bitcoin wallet’. Examples are breadwallet for the iPhone and iPad, or Armory for Mac, Linux and Windows computers. There are also online wallets. Users may think, ‘the wallet is where I store my bitcoins’. More accurately, the wallet stores the digital credentials you need to access the bitcoin values registered in the blockchain ledger against your anonymised identity.

Launching her wallet, Alice enters the amount she wants to send, plus the Bitcoin address provided by Bob, and presses Send.

For security, Alice’s wallet uses public–private key cryptography to append a scrambled digital signature to the resulting message. By keeping her private key secret, Alice is guaranteed that no-one can spoof Bitcoin into thinking that the message was sent to the system by anyone other than her. The Bitcoin messaging system records neither Alice’s nor Bob’s identity in the data record, other than in deeply encrypted form: an aspect of Bitcoin that has been criticised for its ability to mask criminally-inspired transactions.
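
By way of illustration – and to be clear, this is my own sketch, not how the Bitcoin software is actually written – the sign-and-verify pattern looks roughly like this in Python. Bitcoin itself uses ECDSA over the secp256k1 curve; I use the Ed25519 scheme from the widely available cryptography package here simply because it is easy to demonstrate:

```python
# A minimal sketch of public–private key signing, as used to authorise a
# transaction proposal. Assumes the third-party 'cryptography' package;
# Bitcoin itself uses ECDSA/secp256k1, not Ed25519 as here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # Alice keeps this secret, in her wallet
public_key = private_key.public_key()        # anyone may hold this

message = b"transfer 1.23 BTC to 1HEhEpnDhRMUEQSxSWeV3xBoxdSHjfMZJ5"
signature = private_key.sign(message)

# Verification succeeds only if the message is untouched and was signed
# with Alice's private key; any tampering raises InvalidSignature.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("spoofed or altered!")
```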

At this stage, Alice is initiating no more than a proposal, namely that the Bitcoin blockchain should be altered to show her wallet as that bit ‘emptier’, and Bob’s a bit ‘fuller’. Computers on the network will check whether Alice’s digital signature can be verified with her public key, that the address provided by Bob is valid, and that Alice’s account does in fact have enough bitcoin value to support the transaction.

If Alice’s bitcoin transaction proposal is found to be valid and respectable, the transaction can be enacted, by modifying the blockchain database (updating the ledger, if you like). As Marc pointed out, this is done not by changing what is there already, but by adding a new block to the end of the chain. Multiple transactions get bundled together into one Bitcoin block, and the process is dynamically managed by the Bitcoin server network to permit the generation of just one new such block approximately every ten minutes – for peculiar reasons I shall later explain.

Making a block: the role of the ‘hash’

The blocks are generated by special participating servers in the Bitcoin network, which are called ‘miners’ because they get automatically rewarded for the work they do by having some new Bitcoin value allocated to them.

In the process of making a block to add to the Bitcoin blockchain, the first step is to gather up the pending transaction records, which are placed into the body of the new block. These transaction records themselves are not encrypted, though the identities of senders and receivers are. I have heard people say that the whole blockchain is irreversibly encrypted, but if you think about it for a second, this has to be nonsense. If the records were rendered uninspectable, the blockchain would be useless as a record-keeping system!

However, the block as a whole, and beyond that the blockchain, has to be protected from accidental or malicious alteration. To do this, the transaction data is put through a process called ‘cryptographic hashing’. Hashing is a well-established computing process that feeds an arbitrarily large amount of data (the ‘input’ or ‘message’) through a precisely defined algorithmic process, which reduces it down to a fixed-length string of digits (the ‘hash’). The hashing algorithm used by Bitcoin is SHA-256, created by the US National Security Agency and put into the public domain.

By way of example, I used the facility at http://passwordsgenerator.net/sha256-hash-generator to make an SHA-256 hash of everything in this article up to the end of the last paragraph (in previous edits, I should add; I’ve made changes since). I got 9F0B 653D 4E6E 7323 4E03 B04C F246 4517 8A96 DFF1 7AA1 DA1B F146 6E1D 27B0 CA75 (you can ignore the spaces).

The hash string looks kind of random, but it isn’t – it’s ‘deterministic’. Applying the same hashing algorithm to the same data input will always result in the same hash output. But, if the input data were to be modified by even a single character or byte, the resulting hash would come out markedly different.
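
You can try this yourself with a few lines of Python, using the standard library’s hashlib (this is just my illustration, nothing specific to Bitcoin’s codebase):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(sha256_hex(b"Alice pays Bob 1.23 BTC"))  # some 64-character hex string
print(sha256_hex(b"Alice pays Bob 1.23 BTC"))  # the same string: hashing is deterministic
print(sha256_hex(b"Alice pays Bob 1.24 BTC"))  # one character changed: a completely different hash
```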

Note that the hash function is, for all practical purposes, ‘one-way’. That is, going from data to hash is easy, but processing the hash back into the data is impossible: in the case of the example I just provided, so much data has been discarded in the hashing process that no-one receiving just the hash can ever reconstitute the data. It is also theoretically possible, because of the data-winnowing process, that another set of data subjected to the same hashing algorithm could output the same hash, but this is an extremely unlikely occurrence. In the language of Bitcoin, the hashing process is described as ‘collision-resistant’.

The sole purpose of this hashing process is to build a kind of internal certificate, which gets written into a special part of the block called the ‘header’. Here, cryptography is not being used to hide the transaction data, as it might in secret messaging, but to provide a guarantee that the data has not been tampered with.

Joining the hash of the transaction data in the header are some other data, including the current timestamp, and a hash of the header of the preceding block in the chain. These additions are what gives the blockchain its inherent history, for the preceding block also contained a hash of the header of the block before that, and so on down the line to the very first block ever made.
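
Putting the last few paragraphs together, here is a deliberately simplified sketch in Python of how such a hash-linked chain might be built and checked. It leaves out almost everything that makes Bitcoin Bitcoin (Merkle trees, nonces, the network), but it shows why a retroactive edit anywhere in the chain is immediately detectable:

```python
import hashlib, json, time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(transactions, prev_header_hash):
    """Bundle transactions, hash them, and build a header linking back to the previous block."""
    header = {
        "timestamp": time.time(),
        "tx_hash": sha256_hex(json.dumps(transactions).encode()),
        "prev_header_hash": prev_header_hash,
    }
    return {"header": header, "transactions": transactions}

def header_hash(block):
    return sha256_hex(json.dumps(block["header"], sort_keys=True).encode())

def verify_chain(chain):
    """Walk the chain: any altered transaction or broken link fails the check."""
    for prev, block in zip(chain, chain[1:]):
        if block["header"]["prev_header_hash"] != header_hash(prev):
            return False
    return all(
        b["header"]["tx_hash"] == sha256_hex(json.dumps(b["transactions"]).encode())
        for b in chain
    )

genesis = make_block(["genesis"], "0" * 64)
chain = [genesis, make_block(["Alice -> Bob, 1.23"], header_hash(genesis))]
print(verify_chain(chain))           # True
chain[0]["transactions"][0] = "Eve"  # a retroactive edit...
print(verify_chain(chain))           # ...False: the tampering is detected
```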

The role of the ‘miner’ in the Bitcoin system

Now, as far as I can tell, there is nothing in principle wrong with having the blockchain-building process run by one trusted computer, with the refreshed blockchain perhaps being broadcast out at intervals and stored redundantly on several servers as a protection against disaster.

But that’s not the way that Bitcoin chose to do things. They wanted the block-writing process to be done in a radically decentralised way, by servers competing against each other on a peer-to-peer network; they also chose to force these competing servers to solve tough puzzles that are computationally very expensive to process. Why?

Because intimately entangled in the way the Bitcoin ecology builds blocks is the way that new bitcoins are minted; at present the ‘reward’ from the system to a miner-machine for successfully solving the puzzle and making the latest block in the chain is a fee of 12.5 fresh new bitcoins, worth thousands of dollars at current exchange rates. That’s what motivates private companies to invest in mining hardware, and take part in the game.

This reward-for-work scheme is why the specialised computers that participate in the block-building competition are called ‘miners’.

Let’s assume that the miner has got as far through the process as verifying and bundling the transaction data, and has created the hash of the data for the header. At this point the Bitcoin system cooks up a mathematical puzzle based on the hash, which the ‘miner’ system making the block has to solve. These mathematical puzzles (and I cannot enlighten you more about their precise nature, it’s beyond me!) can be solved only by trial and error methods. Across the network, the competing miner servers are grinding away, trying trillions of possible answers, hashing the answers and comparing them to the header hash and the puzzle instructions to see if they’ve got a match.
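
To give a feel for the trial-and-error nature of the search, here is a toy version in Python (my own simplification, not Bitcoin’s actual target-threshold scheme): hunt for a number whose hash starts with a given run of zeros.

```python
import hashlib

def solve_puzzle(header_hash: str, difficulty: int) -> int:
    """Grind through candidate nonces until one hashes to `difficulty` leading zeros."""
    nonce = 0
    while True:
        attempt = hashlib.sha256(f"{header_hash}{nonce}".encode()).hexdigest()
        if attempt.startswith("0" * difficulty):
            return nonce
        nonce += 1

def check_solution(header_hash: str, nonce: int, difficulty: int) -> bool:
    """Checking a claimed solution takes just one hash: hard to solve, easy to verify."""
    attempt = hashlib.sha256(f"{header_hash}{nonce}".encode()).hexdigest()
    return attempt.startswith("0" * difficulty)

nonce = solve_puzzle("9f0b653d", difficulty=5)       # a few seconds on a laptop;
print(nonce, check_solution("9f0b653d", nonce, 5))   # Bitcoin's real difficulty is astronomically higher
```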

All this grinding consumes a lot of computing power and energy – in 2014, one bitcoin ‘mining farm’ operator, Megabigpower in Washington State, USA, estimated that it was costing 240 kilowatt-hours of electricity per bitcoin earned, the equivalent of 16 gallons of petrol. It has doubtless gone up by now. The hashing power of the machines in the Bitcoin network has surpassed the combined might of the world’s 500 fastest supercomputers! (See ‘What is the Carbon Footprint of a Bitcoin?’ by Danny Bradbury: https://www.coindesk.com/carbon-footprint-bitcoin.)

When a miner ‘thinks’ it has a correct solution, it broadcasts this to the rest of the network and asks other servers to check the result (and, thanks to the hash-function check, though solving the problem is hard, checking the result is easy). All the servers that ‘approve’ the solution – strangely, it’s called a ‘nonce’ – will accept the proposed block, now timestamped and with a hash of the previous block’s header included to form the chainlink, and they update their local record of the blockchain accordingly. The successful miner is rewarded with a transaction that earns it a Block Reward, and I think collects some user transaction fees as well.

Because Bitcoin is decentralised, there’s always the possibility that servers will fall out of step, which can cause temporary forks and mismatches at the most recent end of the blockchain, across the network (‘loose ends’, you might call them). However, the way that each block links to the previous one, plus the timestamping, plus the rule that each node in the network must work with the longest extant version it can find, means that these discrepancies are self-repairing, and the data store is harmonised automatically even though there is no central enforcing agency.

The Bitcoin puzzle-allocation system dynamically adjusts the complexity of the puzzles so that they are being solved globally at a rate of only about six an hour. Thus, although there is a kind of ‘arms race’ between competing miners, running on ever faster competing platforms, the puzzles just keep on getting tougher and tougher to crack, and this is what controls the slow increase in the Bitcoin ‘money supply’. Added to this is a process by which the rate of reward for proof-of-work is slowly decreased over time, which in theory should make bitcoins increasingly valuable, rewarding the people who own and use them.
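
The retargeting arithmetic is, as I understand it, roughly as follows (simplified: Bitcoin actually adjusts a numeric target threshold every 2,016 blocks rather than a single ‘difficulty’ number, and clamps the change to a factor of four):

```python
def retarget(old_difficulty: float, actual_seconds: float,
             expected_seconds: float = 2016 * 600) -> float:
    """Every 2,016 blocks, scale difficulty so blocks average one per 600 seconds.
    Faster-than-expected mining raises difficulty; the change is clamped to 4x."""
    ratio = expected_seconds / actual_seconds
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Miners sped up: the last 2,016 blocks took 400 s each on average,
# so difficulty rises by a factor of 1.5 to restore the ten-minute rhythm.
print(retarget(1_000_000.0, actual_seconds=2016 * 400))
```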

As I shall shortly explain, this computationally expensive ‘proof-of-work’ system is not a necessary feature of blockchain per se, and other blockchains use a less expensive ‘proof-of-stake’ system to allocate work.

Disentangling blockchain from Bitcoin

To sum up, in my opinion the essential characteristics of blockchain in general, rather than Bitcoin in particular, are as follows (and compare this with the Wikipedia extract quoted earlier):

  • A blockchain is a data structure that acts as a consultable ledger for recording sequences of facts, statuses, actions or transactions that occur over time. So it is not a database in the sense that a library catalogue is; still less could it be the contents of that library; but the lending records of that library could well be in blockchain form, because they are transactions over time.
  • New data, such as changes of status of persons or objects, are added by appending newly formed blocks of data; each block ‘points’ towards the previous one, and each block also gets a timestamp, so that together the blocks constitute a chain from oldest to newest.
  • The valuable data in the blocks are not necessarily encrypted (contrary to what some people say), so that with the right software, the record is open to inspection.
  • However, a fairly strong form of cryptographic hashing is applied to the data in each block, to generate a kind of internal digital certificate, which acts as a guarantee that the data has not become corrupted or maliciously altered. The hash string thus generated is recorded in the head of the block; and the whole head of the block will be hashed and embedded in the head of the following block, meaning that any alteration to a block can be detected.

And I believe we can set aside the following features which are peculiarities of Bitcoin:

  • The Bitcoin blockchain is a record of all the transactions that have ever taken place between all of the actors within the Bitcoin universe, which is why it is so giganormous (to coin a word). Blockchains that do not have to record value exchange transactions can be much smaller and non-global in scope – my personal medical record, for example, would need to journal only the experiences of one person.
  • All the data tracked by the Bitcoin blockchain has to live inside the blockchain; but blockchain systems can also be hybridised by having them store secure and verified links to other data repositories. And that’s a sensible design choice where the entire data bundle contains binary large objects (BLOBs) such as x-rays, scans of land title deeds, audio and video recordings, etc.
  • The wasteful and computationally expensive ‘proof of work’ test faced by Bitcoin miners is, to my mind, totally unnecessary outside of that kind of cryptocurrency system, and is a burden on the planet.

Marc shows a block

In closing his presentation, Marc displayed a slide image of the beginning of the record of block number 341669 inside the Bitcoin blockchain, from back in February 2015 when the ‘block reward’ for solving a ‘nonce’ was 25 Bitcoins. You can follow this link to examine the whole block on blockchain.info: https://blockchain.info/block/0000000000000000062e8d7d9b7083ea45346d7f8c091164c313eeda2ce5db11. The PDF version of this article (see below) contains some screen captures of this online record.

That block carries records of 1,031 transactions, with a total value of 1,084 BTC, and it is about 377 kB in size (and remember, these blocks add up!). The transaction record data can be clearly read, even though it will not make much sense to human eyes because of the anonymisation provided by the encrypted user address of the sender, and the encrypted destination address for the receiver. Thus all we can see is that ‘17p3BWzFeqh7DLELpodxt2crQjisvDbC95’ sent 50 BTC to ‘1HEhEpnDhRMUEQSxSWeV3xBoxdSHjfMZJ5’.

Other cryptocurrencies, other blockchain methods

Bitcoin has had quite a few imitators; a July 2017 article by Joon Ian Wong listed nine other cryptocurrencies – Ethereum, Ethereum Classic, Ripple, Litecoin, Dash, NEM, IOTA, Monero and EOS. (Others not mentioned include Namecoin, Primecoin, Nxt, BlackCoin and Peercoin.) That article also points to how unstable the exchange values of cryptocurrencies can be: in a seven-day period in July, several lost over 30% of their dollar values, and $7 billion of their market value was wiped out!

From our point of view, what’s interesting is a couple of variations in how alternative systems are organised. Several of these systems have ditched the ‘proof-of-work’ competition as a way of winning the right to make the next block, in favour of some variant of what’s called ‘proof-of-stake’.

As an example, consider Nxt, founded in late 2013 with a crowd-sourced donation campaign. A fixed ‘money’ supply of a billion NXT coins was then distributed, in proportion initially to the contributions made; from this point, trading began. Within the Nxt network, the right to ‘forge’ the next block in the transaction record chain is allocated partly on the basis of the amount of the currency a prospective ‘forger’ holds (that’s the Stake element), but also on the basis of a randomising process. Thus the task is allocated to a single machine, rather than being competed for; and without the puzzle-solving element, the amount of compute power and energy required is slight – the forging process can even run on a smartphone! As for the rewards for ‘playing the game’ and forging the block, the successful block-forger gains the transaction fees.
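
As a sketch of the proof-of-stake idea (my illustration; real systems like Nxt derive their randomness verifiably from the chain itself, not from a plain seeded generator):

```python
import random
from collections import Counter

stakes = {"alice": 600, "bob": 300, "carol": 100}   # holdings of the native coin

def pick_forger(stakes, seed):
    """Choose the next block-forger with probability proportional to stake.
    No puzzle-grinding: one cheap random draw allocates the task to one machine."""
    rng = random.Random(seed)
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# Over many rounds, alice forges ~60% of blocks, bob ~30%, carol ~10%.
print(Counter(pick_forger(stakes, seed) for seed in range(10_000)))
```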

Marc specifically mentioned Ethereum, founded in 2014–15, the currency of which is called the ‘ether’. In particular he referred to how Ethereum supports ‘Smart Contracts’, which are exchange mechanisms performed by instructions in a scripting language being executed on the Ethereum Virtual Machine – not literally a machine, but a distributed computing platform that runs across the network of participating servers. Smart contracts have been explored by the bank UBS as a way of making automated payments to holders of ‘smart bonds’, and a project called The DAO tried to use the Ethereum platform to crowd-fund venture capital. The scripts can execute conditionally – the Lighthouse project is a crowd-funding service that makes transfers from funders to projects only if the funding campaign target has been met.
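
The conditional logic of such a contract is simple to express. Here is a toy Lighthouse-style sketch in Python (on Ethereum this would be contract code executed identically by every node; the class and figures here are entirely my invention):

```python
class CrowdfundContract:
    """Toy sketch: pledges are paid to the project only if the target is met
    by the deadline; otherwise everyone is refunded. Because every node runs
    the same rule, no intermediary is needed."""

    def __init__(self, target, deadline):
        self.target = target        # amount that must be raised
        self.deadline = deadline    # cut-off time (block height or timestamp)
        self.pledges = {}

    def pledge(self, funder, amount, now):
        if now >= self.deadline:
            raise ValueError("campaign closed")
        self.pledges[funder] = self.pledges.get(funder, 0) + amount

    def settle(self, now):
        if now < self.deadline:
            raise ValueError("campaign still running")
        if sum(self.pledges.values()) >= self.target:
            return {"project": sum(self.pledges.values())}  # pay out
        return dict(self.pledges)                           # refund everyone

c = CrowdfundContract(target=100, deadline=10)
c.pledge("alice", 60, now=1)
c.pledge("bob", 50, now=2)
print(c.settle(now=11))   # {'project': 110} – target met, funds released
```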

Other uses of blockchain distributed ledgers

In October 2015, a feature article in The Economist pointed out that ‘the technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the cryptocurrency.’ One of the areas of application they highlighted was the secure registration of land rights and real-estate transactions, and a pioneer in this has been Lantmäteriet, Sweden’s Land Registry organisation.

Establishing a blockchain-based publicly inspectable record about the ownership (and transfer of ownership) of physical properties poses some different problems from those of systems that simply transfer currency. The base records can include scans of signed contracts, digital photos, maps and similar objects. What Lantmäteriet aims to collect in the blockchain are what it dubs ‘fingerprints’ for these digital assets – SHA-256 hashes computed from the digital data. You cannot tell from a fingerprint what a person looks like, but it can still function as a form of identity verification. As a report on the project explains:

‘A purchasing contract for a real estate transaction that is scanned and becomes digital is an example. The hash that is created from the document is unique. For example, if a bank receives a purchasing contract sent via email, the bank can see that the document is correct. The bank takes the document and run the algorithm SHA-256 on the file. The bank can then compare the hash with the hash that is on the list of verification records, assuming that it is available to the bank. The bank can then trust that the document really is the original purchasing contract. If someone sends an incorrect contract, the hash will not match. Despite the fact that email has a low level of security, the bank can feel confident about the authenticity of the document.’ (‘The Land Registry in the blockchain’ — http://ica-it.org/pdf/Blockchain_Landregistry_Report.pdf)
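
The bank’s check described in that passage amounts to a few lines of code. A sketch (the file name and recorded hash here are hypothetical, of course):

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 'fingerprint' of a digital asset, e.g. a scanned purchasing contract."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The hash previously written to the verification record (hypothetical value)
recorded = "9f0b653d4e6e73234e03b04cf24645178a96dff17aa1da1bf1466e1d27b0ca75"

if fingerprint("purchasing_contract.pdf") == recorded:
    print("document matches the registered original")
else:
    print("document altered, or not the original")
```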

In the UK, Her Majesty’s Land Registry has started a project called ‘Digital Street’ to investigate using blockchain to allow property transactions to close almost instantaneously. Greece, Georgia and Honduras have similar projects under way.

In Ghana, there is no reliable nationwide way of registering ownership of land and property, but a nonprofit project called Bitland is drawing up plans for a blockchain-verified process for land surveys, agreements and documentation, which – independent of government – will provide people with secure title (www.bitland.world). As they point out, inability to prove ownership of land is quite common across Africa, and this means that farmers cannot raise bank capital for development by putting up land as security.

Neocapita is a company that is developing Stoneblock as a decentralised blockchain-based registration service for any government-managed information, such as citizen records. They are working in collaboration with the United Nations Development Program, World Vision, and two governments (Afghanistan and Papua New Guinea), initially around providing a transparent record of aid contributions, and land registry.

Noeleen Schenk on blockchain and information governance

After Marc Stephenson had given his technical overview of Blockchain, Noeleen Schenk (also of Metataxis) addressed the issue of what these developments may mean for people who work with information and records management, especially where there are issues around governance.

Obviously there is great interest in blockchain in financial markets, securities and the like, but opportunities are also being spotted around securing the integrity of the supply chain and proving provenance. Walmart is working with IBM on a project that would reliably track foodstuffs, from source to shelf. The Bank of Canada is looking towards using blockchain methods to verify customer identities, on the basis that the bank has already gone through identity checks when the customer opened their account. Someone in the audience pointed out that there are also lots of applications for verified records of identity in the developing world, and Noeleen mentioned that Microsoft and the UN are looking at methods to assist the approximately 150 million people who lack proof of identity.

Google DeepMind Health is looking at using some blockchain-related methods around electronic health records, in a concept called ‘Verifiable Data Audit‘, which would automatically record every interaction with patient data (changes, but also access). They argue that health data needn’t be as radically decentralised as in Bitcoin’s system – a federated structure would suffice – nor is proof-of-work an appropriate part of the blockmaking process in this context. The aim is to secure trust in the data record (though ironically, DeepMind was recently deemed to have handled 1.6 million Royal Free Hospital patient records inappropriately).

Noeleen referred to the ISO standard on records management, ISO 15489-1, which defines the characteristics of ‘authoritative records’ as meeting standards for authenticity, reliability, integrity and usability. What has blockchain to offer here?

Well, where a blockchain is managed on a decentralised processing network, one advantage can be distributed processing power, and avoidance of the ‘single point of failure’ problem. The use of cryptographic hashes ensures that the data has not been tampered with, and where encryption is used, it helps secure data against unauthorised access in the first place.

Challenges to be solved

Looking critically at blockchain with an information manager’s eye, Noeleen noticed quite a few challenges, of which I highlight some:

  • Private blockchains are beginning to make their appearance in various sectors (the Walmart provenance application is a case in point). This raises questions of what happens when different information management systems need to interoperate.
  • In many information management applications, it is neither necessary nor desirable to have all of the information actually contained within the block (the Lantmäteriet system is a case in point). Bringing blockchain into the picture doesn’t make the problem of inter-relating datasets go away.
  • Blockchain technology will impact the processes by which information is handled, and people’s roles and responsibilities with that process. Centres of control may give way to radical decentralisation.
  • There will be legal and regulatory implications, especially where information management systems cross different jurisdictions.
  • Noeleen has noticed that where people gather (with great enthusiasm) to discuss what blockchain can do, there seems to be very poor awareness amongst them of well-established record-keeping theory, principles, and normal standards of practice. The techies are not thinking enough about information management requirements.

These issues require information professionals to engage with the IT folks, and advocate the incorporation of information and record-keeping principles into blockchain projects, and the application of information architectural rigour.

Intermediate discussion

Following Noeleen’s presentation, there were some points raised by the audience. One question was how, where the blockchain points to data held externally, that external data can itself be verified, and how it can be secured against inappropriate access.

Someone made the point that it is possible to set up a ‘cryptographic storage system’ in which the data is itself encrypted on the data server, using well established public–private key encryption methods, and therefore accessible only to those who have access to the appropriate key. As for the record in the blockchain, what that stores could be the data location, plus the cryptographic hash of the data, so that any tampering with the external data would be easy to detect.
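
In sketch form (my hypothetical example; the URL and ciphertext are placeholders), the on-chain record then carries only a pointer and a fingerprint:

```python
import hashlib

def register(location: str, stored_bytes: bytes) -> dict:
    """What goes on-chain: a pointer to the external store plus a fingerprint.
    The (possibly encrypted) document itself never enters the blockchain."""
    return {"location": location, "sha256": hashlib.sha256(stored_bytes).hexdigest()}

def verify(record: dict, fetched_bytes: bytes) -> bool:
    """Re-hash whatever the external store returns and compare with the record."""
    return hashlib.sha256(fetched_bytes).hexdigest() == record["sha256"]

entry = register("https://store.example/docs/42", b"...ciphertext...")
print(verify(entry, b"...ciphertext..."))   # True
print(verify(entry, b"...tampered..."))     # False: external tampering detected
```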

What blockchain technology doesn’t protect against is bad data quality to start with. I’m reminded of a recent case in which it emerged that a sloppy clinical coder had entered a code on a lady’s record, indicating that she had died of Sudden Infant Death Syndrome (happily, she was very much alive). In a blockchain system, that transaction could never be erased – but that doesn’t stop the record being corrected afterwards.

John Sheridan: Blockchain and the Archive: the TNA experience

Our third presentation was from John Sheridan, the Digital Director at The National Archives (TNA), with the title ‘Application of Distributed Ledger Technology’. He promised to explain what kinds of issues the Archive worries about, and where they think blockchains (or distributed ledgers more generally) might help. On the digital side of TNA, they are now looking at three use-cases, which he would describe.

John remarked that the State gathers information ‘in order to make Society legible to it’ – so that it might govern. Perhaps The Domesday Book was one of the world’s first structured datasets, collected so that the Norman rulers might know who owned what across the nation, for taxation purposes. The Archive’s role, on the other hand, is to enable the citizen to see the State, and what the State has recorded, by perusing the record of government (subject to delays).

Much of the ethos of the TNA was set by Sir Hilary Jenkinson, of the Public Record Office (which merged with three other bodies to form TNA in 2003). He was a great contributor to archive theory, and in 1922 wrote A Manual of Archive Administration (text available in various formats from The Internet Archive, https://archive.org/details/manualofarchivea00jenkuoft). TNA still follows his attitude and ideas about how information is appraised and selected, how it is preserved, and what it means to make that information available.

An important part of TNA practice is the Archive Descriptive Inventory – a hierarchical organisation of descriptions for records, in which is captured something of the provenance of the information. ‘It’s sort of magnificent… it kind of works,’ he said, comparing it to a steam locomotive. But it’s not the best solution for the 21st century. It’s therefore rather paradoxical that TNA has been running a functional digital archive with a mindset that is ‘paper all the way down’ – a straight line of inheritance from Jenkinson, using computers to simulate a paper record.

Towards a second-generation digital archive

It’s time, he said, to move to a second-generation approach to digital archive management; and research into disruptive new technologies is important in this.

For the physical archive, TNA practice has been more or less to keep everything that is passed to it. That stuff is already in a form that they can preserve (in a box), and that they can present (only eyes required, and maybe reading spectacles). But for the digital archive, they have to make decisions against a much more complex risk landscape; and with each generation of technological change, there is a change in the digital preservation risks. TNA is having to become far more active in making decisions about what evidence the future may want to have access to, which risks they will seek to mitigate, and which ones they won’t.

They have decided that one of the most important things TNA must do, is to provide evidence for purposes of trust – not only in the collection they end up with, but also in the choices that they have made in respect of that collection. Blockchain offers part of that solution, because it can ‘timestamp’ a hash of the digital archive asset (even if they can’t yet show it to the public), and thereby offer the public an assurance, when the archive data is finally released, that it hasn’t been altered in the meantime.

Some other aims TNA has in respect of the digital archive include: being more fluid about how an asset’s context is described; dealing with uncertainties in provenance, such as about when a record was created; and permitting a more sophisticated, perhaps graduated, form of public access, rather than just now-you-can’t-see-it, now-you-can. (They can’t simply dump everything on the Web – there are considerations of privacy, of the law of defamation, of intellectual property and more besides.)

The Archangel project

Archangel is a brand new project in which TNA is engaged together with the University of Surrey’s Centre for the Digital Economy and the Open Data Institute. It is one of seven projects that EPSRC is funding to look at different contexts of use for distributed ledger technology. Archangel is focused specifically on public digital archives, and the participants will try to work with a group of other memory institutions.

The Archangel project will not be using the blockchain methods that Marc had outlined. Apparently, they have their own distributed ledger technology (DLT), with ‘permissioned’ access.

The first use-case, which will occupy them for the first six months, will focus on a wide variety of types of research data held by universities: they want to see if they can produce sets of hashes for such data, such that at a later date, when the findings of the research are published and the data is potentially archived, any question of whether the data has been tampered with or manipulated can be dealt with by cryptographic assurance spread across a group of participating institutions. (The so-called ‘Climategate’ furore comes to mind.)

The second use-case is for a more complex kind of digital object. For example, TNA preserves the video record of proceedings of The Supreme Court. In raw form, one such digital video file could weigh in at over a terabyte! Digital video transcoding methods, including compression algorithms, are changing at a rapid pace, so that in a decade’s time it’s likely that the digital object provided to the public will have to have been converted to a different file format. How is it possible to create a cryptographic hash for something so large? And is there some way of hashing not the bit sequence, but the informational content in the video?
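
The sheer size, at least, is tractable: a hash can be computed as a stream, without ever holding the whole file in memory. A sketch (again mine, not Archangel’s code):

```python
import hashlib

def hash_large_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 one megabyte at a time, so even a
    terabyte-scale video can be fingerprinted in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Note this fingerprints the bit sequence only: transcode the video to a new
# format and the hash changes – which is exactly why hashing the informational
# content, rather than the bytes, is the harder research question.
```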

It’s also fascinating to speculate about how machines in future might be able to interpret the informational content in a video. At the moment, a machine can’t interpret the meaning in someone’s facial expressions – but maybe in the future?

For this, they’ll be working with academics who specialise in digital signal processing. They are also starting to face similar questions with ‘digital surrogates’ – digital representations of an analogue object.

The third use-case is about Deep Time. Most people experimenting with blockchain have a relatively short timescale over which a record needs to be kept in verifiable form, but the aspirations of a national archive must look to hundreds, maybe thousands of years.

Another important aspect of the Archangel project is the collaboration that is being sought between memory institutions, which might reach out to each other in a concerted effort to underscore trust in each other’s collections. On a world scale this is important because there are archives and collections at significant risk – in some places, for example, people will turn up with Kalashnikovs to destroy evidence of human rights abuses.


Discussions and some closing thoughts

Table group discussions: NetIKX meetings typically feature a ‘second half’, which is made up of table-group discussions or exercises (syndicate sessions), followed by a summing-up plenary discussion. However, the speakers had not organised any focused discussion topics, and certainly the group I was in had a fairly rambling discussion trying to get to grips with the complexity and novelty of the subject. Likewise, there was not much ‘meat’ that emerged in the ten minutes or so of summing up.

One suggestion from Rob Begley, who is doing some research into blockchain, was that we might benefit from reading Dave Birch’s thoughts on the topic – see his Web site at http://www.dgwbirch.com. However, it’s to be borne in mind that Birch comes at the topic from a background in electronic payments and transactions.

My own closing thoughts: There is a lot of excitement – one might say hype – around blockchain. As Noeleen put it, in the various events on blockchain she had attended, a common attitude seems to be ‘The answer is blockchain! Now, what was the problem?’ As she also wisely observed, the major focus seems to be on technology and cryptocurrency, and the principles of information and records management scarcely get a look-in.

The value of blockchain methods seems to centre chiefly on questions of trust, using cryptographic hashing and a decentralised ledger system to create a hard-to-subvert, time-stamped record of transactions between people. The transactional data could be about money (and there are those who suggest it is the way forward for extending banking services in the developing world); the application to land and property registration is also very promising.

Another possible application I’m interested in could be around ‘time banking’, a variant of alternative currency. For example in Japan, there is a scheme called ‘Fureai Kippu’ (the ‘caring relationship ticket’), which was founded in 1995 by the Sawayaka Welfare Foundation as a trading scheme in which the basic unit of account is an hour of service to an elderly person who needs help. Sometimes seniors help each other and earn credits that way, sometimes younger people work for credits and transfer them to elderly relatives who live elsewhere, and some people accumulate the credits themselves against a time in later life when they will need help. It strikes me that time-banking might be an interesting and useful application of blockchain – though Fureai Kippu seems to get on fine without it.

When it comes to information-management applications that are non-transactional, and which involve large volumes of data, a blockchain system itself cannot cope: the record would soon become impossibly huge. External data stores will be needed, to which a blockchain record must ‘point’. The hybrid direction being taken by Sweden’s Lantmäteriet, and by the Archangel project, seems more promising.

As for the event’s title, ‘The Implications of Blockchain for KM and IM’ – my impression is that blockchain offers nothing to the craft of knowledge management, other than perhaps to curate information gathered in the process.

Some reading suggestions

Four industries blockchain will disrupt (https://www.researchandmarkets.com/blog/4-industries-blockchain-will-disrupt)

Two billion people lack access to a bank account. Here are 3 ways blockchain can help them (https://www.weforum.org/agenda/2017/06/3-ways-blockchain-can-accelerate-financial-inclusion)

TED talk, Don Tapscott on ‘How the blockchain is changing money and business’ (https://www.ted.com/talks/don_tapscott_how_the_blockchain_is_changing_money_and_business)

Why Ethereum holds so much promise (http://uk.businessinsider.com/bitcoin-ethereum-price-2017-7)

Wikipedia also has many valuable articles about blockchain, cryptographic hashing, etc.

Note:

The original version of this article can be found at http://www.conradiator.com/kidmm/netikx-jul2017-blockchain.html. You can also download a pdf (9 pages; 569 kB): http://www.conradiator.com/kidmm/netikx-resource/NetIKX-blockchain.pdf.


Developing Effective Collaborative Knowledge Spaces

Conrad Taylor writes:

During 2017, which is a 20th anniversary year for NetIKX, a number of eminent speakers have been invited to lead meetings, speakers who for the most part have addressed NetIKX before. At the meeting on 18 May 2017 the speakers were Paul Corney and Victoria Ward.

Paul worked for 25 years in top management in the City of London financial sector (Saudi International Bank and Zurich Reinsurance), and for the last couple of decades has pursued a ‘portfolio career’ as a business adviser, facilitator and business coach, with clients in 24 different countries, including Iran, Saudi Arabia, the Gulf States and several African countries.

Paul is also a managing partner at the Sparknow consultancy, which Victoria Ward founded in 1997. Victoria’s background is similarly in knowledge management in the banking sector. Sparknow approaches organisational KM using narrative enquiry methods, and Victoria can list amongst her former clients a number of banks, government agencies, museums and cultural organisations, the World Health Organisation and the British Council.

Recently, Victoria and Paul have been working with Clive Holtham of the Cass Business School on a project looking at how the arrangement of space impacts the working environment, and knowledge sharing within that. Paul has been conducting a kind of rolling survey across various locations around the world. We in NetIKX would be the latest to add our thoughts; and Paul intends to publish a report as the summation of this enquiry.

Points of view

Paul and Victoria set up an exercise in which the forty or so people present were clustered into three groups, out of earshot of each other. Each group was then quietly told what ‘profession’ we were to adopt as our collective point of view. We were to carefully make an assessment of the room we were in, from that assumed profession’s point of view, and list the positive, and difficult, characteristics of the room. Then each group, through a spokesperson, would tell the others about their list of good or bad room features – and the other groups were supposed to guess that group’s profession!

Group One commented that the room was very white and light, and that there were lots of power points. They noted there was quite a lot of furniture, but the tables were on wheels and easily moved; there were lots of nooks and crannies, and lots of potential for mess around the coffee machines. We guessed that they were cleaners! Group Two mentioned the functional design of the room; the low ceiling and narrow form of the room; and lots of natural light from the windows. They were interior designers!

I was in Group Three and I think we had the most fun assignment. We talked about there being a couple of useful exits including a fire exit onto the roof (with presumably a way off that down to street level); various valuables conveniently next to the door, and some rather nice looking IT equipment; perhaps too many windows to be able to operate unseen, but no CCTV cameras. Yes, we were the thieves!

That was a nice, fun ice-breaker, but it was also more than that, as Victoria and Paul explained. Things (and not just rooms!) look different according to your point of view. They had come across this exercise used in a very large gathering at the Smithsonian Museum, and it’s especially useful to deploy at the start of a meeting when you want to draw attention to how a thing, or a situation, might look very different from somebody else’s perspective; something that’s good to bear in mind when there are many stakeholders.

Perspectives on Knowledge Management

KM, or Knowledge Management, has been described as ‘a discipline focused on ways that organisations create and use knowledge’. However, said Paul, beyond that there is no single accepted definition of what KM is, and it’s a field with no agreed global standards as yet.

Paul works around the use of knowledge within businesses. In his newly published book ‘Navigating the Minefield: a practical KM Companion’ he has suggested some characteristics which could define ‘a good KM programme’, such as it being in support of a business’s goals, and aligned with its culture. One focus will be operational, seeking to cut the costs of doing business (in money or time) – in practice, this is the focus of four out of five KM projects in business. Some projects look in more strategic directions, towards innovation and future business benefit.

One paradox of KM is that many of the people who practise it do not stay long term with their employers, but move on every few years to a new appointment. This can lead to the pursuit of short-term goals and ‘fighting fires’ rather than more strategic approaches.

How can you effectively transfer knowledge from an expert, to a wider community? One positive story Paul shared was of work he did with Cláudia Bandeira de Lima, a leading authority on childhood autism and language development in the Portuguese-speaking world. The solution they devised was to run a foundation programme in the methodology, PIPA (Programa Integrado Para o Autismo), teaching courses and accrediting practitioners.

To represent another aspect of KM, at the personal level, Paul used an image of a laptop. If it is stolen or breaks down, you can replace the hardware and the applications, but if you haven’t backed up the documents which constitute your knowledge resources, ‘you’re toast!’ In doing knowledge audits, he and Victoria often found that sloppy attitudes to managing digital knowledge resources were rampant. An American survey from a few years ago estimated that a typical cost to replace someone in a senior business position is in the region of $400,000 – because when the previous incumbent moved on, they took their knowledge with them, and nobody had done anything to ‘back it up’.

Drivers and definitions

What is driving this thing called ‘knowledge management’? Why do people do it? To Paul it seems that a major driver within many businesses is compliance with regulations; and in a couple of years, when ISO standards for knowledge management appear, it will likely be about compliance with those standards as well. ‘Already today, if you want to sell a locomotive, one of the criteria is that you engage in knowledge management, and are seen to do so in a very professional way,’ explained Paul.

A second driver is around innovation and process efficiency; people believe there is benefit to doing things better with what you have. And a third driver is the management of risk. And then, in some organisations at any rate, there are concerns about using KM to support governance, strategy and vision.

Paul used a simple ‘three pillars’ diagram to represent the above scheme, but his next diagram, giving some examples of motivators/drivers for KM in the real world, was more complicated and so we reproduce it here as an image, with his permission. He represented five different industrial sectors as examples: nuclear power, the regulatory sector, government, industry and the services sector.

In the nuclear industry, a key driver is planning for the complex process of decommissioning power plants at the end of their lives. Companies anticipate that when that time comes, they will be downsizing, and at the same time losing people with maybe 40 years of experience in nuclear operations and decommissioning.

In the regulatory sector (as Paul and Victoria found through interviews in Canada some years back), a large problem is around succession planning as people at the top retire. This is similar to the driver for Shell’s ‘ROCK’ programme (Retention of Critical Knowledge), prompted by what Shell called ‘The Great Crew Change’.

In government, ‘flexible working’ has been invoked as a mantra. As Paul and Victoria discovered in interviews at the Department of Justice, a possible effect of this is the diffusion of specialist knowledge, as working becomes more generic. But if this can be managed, services can be improved.

Enhancing manufacturing processes is a key driver for industry. Paul described a recent three-year project he ran for Iran’s largest company, which aimed to shorten the time it took from coming up with an idea, to bringing it to market.

In the services sector, including finance and legal work, Paul said that the key to business efficiency is the effective re-use of precedent; it is in this sector that ‘artificial intelligence’ is likely to have the greatest impact.

At this point, Jonathan from the Horniman Museum said that he could identify with all those drivers; but in addition, their raison d’être at the Museum is the curation and transfer of knowledge to the general public. Victoria responded that she’d done work about ten years ago for the Museum Documentation Association, funded by the London Development Agency, looking at what museums contribute to the knowledge economy of London. (The MDA shortly afterwards relaunched itself as the Collections Trust.) Two things which she remembers well from that project, which were not represented by Paul’s diagram, were:

As work gets more ‘nomadic’ and fluid, workers in various industries need somewhere they can think of as an intellectual ‘home’; for fashion, it would be the V&A. But when that MDA study was conducted, it seemed that museums were overlooking their rôle in relation to certain professional knowledge networks.

Knowledge Transfer Officers can play a vital rôle as a ‘cog’ or enabling connector, between the more entrepreneurial innovators in the organisation and those whose instincts are more curatorial and conservative; between ‘fast cultures’ and ‘slow cultures’, if you like.

Co-working hubs

Costs as a driver

Paul referred to a 2013 UK government report on Civil Service reform, authored by Andy Lake of Flexibility.co.uk and called The Way We Work: a guide to smart working environments [http://www.flexibility.co.uk/downloads/TW3-Guide-to-SmartWorking-withcasestudies-5mb.pdf]. This pointed out that the costs of providing working environments, both financial and environmental, can be reduced by switching away from dedicated desks and PCs, to co-working hubs.

Paul hasn’t worked in an office for 20 years – his ‘office’ is just wherever he finds himself with his Mac, his ’phone and other devices. Sparknow had an office for about five years, but the team decided it wasn’t necessary as long as people were disciplined in their collaboration practices. An executive recruitment firm in the USA has suggested that by 2020 perhaps 40% of people will be mobile workers (I presume they refer to office jobs only), and that they will be freelancers; the benefits will be lower operating costs and higher productivity.

With Prof Clive Holtham, Paul has been advancing the view that as these developments occur, organisations will have to ensure that working environments – be they physical like co-working hubs, or virtual like arrangements for remote working – are conducive to effective Knowledge Management. (This is what we were going to be looking at for the rest of the day.)

Victoria Ward noted that when they first started doing Knowledge Audits, people never included looking at their ‘knowledge spaces’. They would look at their networks, their disciplines, but it always surprised the clients when they were asked how the physical workspace functioned. When asked to conduct a Knowledge Audit, they now ask to take a look at such spaces, and ask questions about how they are supported.

Good and bad knowledge spaces

‘Did you know that the average desk is occupied for only 45% of office hours?’ asked Paul. That’s what Will Hutton noted in 2002, in a report for the Industrial Society (which renamed itself the Work Foundation that year). The Foundation claimed that the workplace (the office workspace, that is – not fields and factories, shops and warehouses) was being reinvented as ‘an arena for ideas exchange’ and a drop-in workspace for mobile workers: a place where professional and social interaction can occur. And it noted that workspaces which are badly designed or badly managed can actually damage the physical and mental wellbeing of staff.

The firm of Ove Arup believes that the future of (office) workspace will be a network of locations – many of them on short leases or even pay-as-you-go, shared spaces rather than highly ‘territorial’ ones. Also, they believe there will be a corresponding flexibility in working interactions, operating across both physical and virtual environments.

The Edge. In January 2017, Paul was helping to run some events around the KM Legal conference in Amsterdam. At a Smart Working summit in 2016, Paul had heard of an amazing office building in Amsterdam called ‘The Edge’, so on this trip he made a visit to the place, and was shown around by the architect and the building manager. The building’s developer was OVG Real Estate and the design was by London-based PLP Architecture. The building’s main tenant is the consulting firm, Deloitte. There is a video about the place on YouTube – at https://youtu.be/JSzko-K7dzo – and Paul showed it to us. (There is also a Bloomberg article at https://www.bloomberg.com/features/2015-the-edge-the-worlds-greenest-building/)

The video claims that The Edge is ‘certifiably the greenest building in the world’, with its extensive use of natural light, and harvesting of solar power (the building is a net producer, not consumer, of electricity). Heat pumps circulate water through an insulated aquifer over a hundred metres below, to warm the building in winter and cool it in summer. From the viewpoint of our meeting topic, however, what is significant is how it is structured as a place for a new way of productive working, what the Dutch call het nieuwe werken.

Nobody gets a desk of their own at The Edge; Deloitte’s 2,500 workers there share 1,000 ‘hot desk’ locations, and can also access tiny cubicles or shared meeting facilities, some with massive flat screens which sync with laptops or mobiles. Workspaces are assigned to you according to your schedule for the day, and your ‘home base’ is any locker which you can find empty for the day.

Access to these facilities is driven by a smartphone app used by every worker, and by a system which keeps track of everyone’s location and adjusts the local environment to each person’s preferences; this is supported by a distributed network of 28,000 sensors.

Paul also commented that people do really want to come to work at The Edge – that’s been a driver of recruitment, there is little absenteeism, and it is somewhere clients want to visit too.  Another thing that users of the building repeatedly praise is the use of natural daylight, which supplies 80% of lighting needs (including through a huge central covered atrium).

Ellipsis Media is a successful content management company, which started above a toy shop in Croydon. They used to have meetings around a particular table in the pub opposite, and as they grew into new premises, they bought that table and installed it as their own little bit of history. Paul mentioned other instances of companies (HSBC, Standard Chartered) using their office space to curate their history – the history of their internal community and its journey.

BMS. Paul also described his engagement with the world’s largest reinsurance broker, BMS, which took the opportunity of its move to One America Square, near London Fenchurch Street station, to bring 13 different federated business units into one shared location. As part of the move, BMS created collaborative physical spaces, including a meetings hub called ‘Connexions’ and an adjacent business lounge with the very best coffee, subsidised snacks and high-speed mobile Internet access. This had a great effect in helping people break out of the silos of the formerly isolated business units (see Paul’s account of BMS’s journey at http://www.knowledgeetal.com/?p=465).

KHDA. During 2016, on his way back from Iran, Paul went to see friends in Dubai. The Dubai Knowledge and Human Development Authority (KHDA) manages secondary and higher education in Dubai. He showed us pictures of their open-plan workspace – you’ll often see the Chief Executive sitting there. It’s a very informal place – Paul even had a budgerigar fly past his head!

Asian Development Bank. Victoria and Paul worked together (as Sparknow) in Manila, on a project for the Asian Development Bank. ADB’s shared atrium space at the time included a touchscreen with a huge Google Earth display. Victoria added that ADB had long had a traditionally styled library, but had remodelled it, moving the bookshelves to the edge and creating an open central space. The Google map was put there, and used as an ‘attractor’ to make people slow down and encounter each other, cutting across the boundaries in the organisation. ADB used the space for a number of knowledge-sharing events, including ‘Inside Thursdays’.

ADB got Paul and Victoria to run a three-day workshop in that space, exploring the use of narrative in the ADB. They were able to construct a temporary collaborative knowledge space, with a long timeline laid out over connected tables, and workstations at which participants could mark out a map of the ADB’s history and their hopes for its future – and identify where the oral history practitioners should conduct interviews, and what kinds of questions they should ask.

That event was very memorable for its visual components, too. That pop-up knowledge space, and the shared creation of the timeline and other artefacts, created a useful and engaging memory for people when they then looked later at the products of the knowledge work.

ADB published a paper about this in 2010, called ‘Reflections and Beyond’ (184 pages), which can be retrieved as a PDF from http://reflections.adb.org/wp-content/uploads/2014/08/adb-reflections-and-beyond.pdf. There is also a concise Sparknow narrative about the project at http://www.sparknow.net/publications/ADB_Reflections_Beyond_method.pdf.

Exercise set-up: the Knowledge Space Survey

Before we took our refreshment break, Paul gave a little background to a rolling project he has been co-ordinating, called the ‘Collaborative Knowledge Space Survey’. This qualitative enquiry had already gathered contributions, some by email, and some at events such as at a Masterclass he ran in March at the International Islamic University of Malaysia in Kuala Lumpur. Now it would be NetIKX participants’ chance to contribute!

Paul’s collaborators in collating and reviewing the results are Prof Clive Holtham and Ningyi Jiang at Cass Business School.

To capture people’s ideas about ‘knowledge spaces’ at work (both physical and virtual ones), the survey has ten set questions, but the answers could be open-ended, in textual and often narrative form. There were certainly no multiple-choice answer mechanisms.

Around the walls of the room in which we were meeting, nine posters had been set out, each one with one of the survey questions (except the first question, ‘Which continent do you work in?’), and the space below left blank in readiness for our contributions. Paul asked us to peruse the questions during the break, and choose which of them we would personally like to work with. The nine remaining questions were:

  • Question 2 — Where do you have your most interesting work conversations and do your best work?
  • Question 3 — Do you think your own workspace encourages collaboration? Tell us about a recent incident where this happened and who was involved.
  • Question 4 — Are there any parts of your building or workspace which you associate with memorable moments of work? Tell us about a time and place when this happened.
  • Question 5 — How does where you work reflect the way you work?
  • Question 6 — Have you ever witnessed a company change its physical workspace radically? What happened?
  • Question 7 — What do you understand by the term ‘digital workspace’?
  • Question 8 — In your experience, can you now replace physical workspace with a digital workspace? If so, how? If not, why not?
  • Question 9 — Does your organisation have a workspace strategy, and if so does it include a digital workspace? Please tell us about it.
  • Question 10 — ‘Any Device, Any Time, Anywhere’ is how one organisation now defines its approach to remote working. Looking forward to 2020, what changes do you foresee in the way you work and the devices you will be using?

Each of us should gravitate towards the question that interests us most, and an ideal group size would be 4–6 people. Grouped around our question of choice, we should consider, is there a pattern or theme that we might use in a checklist? And what keywords might we use to ‘tag’ the responses which we chose?

The exercise process

The way our NetIKX group approached the Collaborative Knowledge Space Survey is not the only way it can be done. For a start, the way we assigned ourselves to particular questions meant that by and large each person contributed to thinking about only one of the nine questions – even though Paul declared the Open Space ‘law of two feet’, so we could have moved from one group to another. But the separate group discussions went well in the 30–40 minutes available.

Nobody was attracted to Question 4, and for obvious reasons Question 1 was off the table. Thus we collected reactions to eight out of the ten survey questions. It is decades since I had anything like a ‘regular job’ and worked in a workplace, so I chose to work in the group clustered around Question 8.

After we had filled our posters, Paul prompted each group in turn to share its thinking with the rest of the room. You can see the posters themselves in the images which Paul afterwards embedded within the slide set accompanying this blog post. I also took my recording gear around the room to capture what people said in more detail.

Q2: Where do you have your most interesting work conversations, and where do you do your best work? — This group discussed the value of having both quiet places and busy places. Melanie described the Hub at DWP, which is a large area with a coffee bar and lots of different tables. On the poster, the group had noted that humour and banter, for example around the kitchen, bring people together and leave you feeling motivated. Being on a journey – on a train, or even just walking between places – also has value in freeing up ‘internal conversations’; you often need silence ‘so you can hear yourself think’.

The keywords the group chose were human – flexible – adaptable – informal – fun – balance (between external and internal conversation, and between physical and digital) – mindfulness. Emma added that the most interesting and significant conversations are usually in an informal setting, and are often serendipitous.

Q3: Do you think your own work space encourages collaboration? — This group had started by comparing their own workspace experiences. Lissi referred to her ‘collaboration cocktail’ of spaces, ranging from attending NetIKX meetings to sitting up in bed to do her work. Victoria Ward had a range of spaces and reported positively on ‘Slack’ (slack.com), a cloud-based service which describes itself as ‘real-time messaging, archiving and search for modern teams’ (it’s an acronym for ‘Searchable Log of All Conversation and Knowledge’!).

Graham Robertson works largely alone and his workspace is a room with no windows. ‘Radical uncubicalisation’ was a phrase that came up from two organisations that are trying to draw people out of their cubicles. Edmund Lee (Historic England) said that when people get a taste of this, they love it, but you need other kinds of constraint in place to make things happen.

Collaboration, said someone, involves interaction between human and human, and also between human and information. Information has its own kind of structure around the workplace; but humans, it must be remembered, have other goals in life, even when they are at work: getting on with people, getting something to eat, whatever.

Q5: How does where you work reflect how you work? — Naomi Lees (DWP) said that the culture you work in reflects the physical aspects of where you work. Ayo Onotola, for example, is a librarian who works in a prison. (Every prison is required to have a library, as part of the process of reform and rehabilitation of the inmates.) He said that it may surprise people to know that quite a high proportion of the prisoners are illiterate, and many don’t have English as their first language, so the prison runs a number of educational programmes for them. The library is key to that.

But – when you work in a prison library, it is a bit like being a prisoner yourself! When there is some trouble in the prison, there may be a general lockdown, then nobody comes to the library all day. Prisoners’ behaviour in the library is quite different from how they behave in their cells; ‘they see the library as a cool place to come and chill out’, Ayo said. And they are keen to collect books to take back to their cells. (On the poster, there was a note that recently the library has been moved to the canteen space, and is now getting more use.)

David Penfold’s example was the university, which has many different possible work environments and people – staff and students both – move between them and find those that are most conducive to what they want to do right now. And people also do much of their work from home.

Q6 — Have you ever witnessed a company change its physical workspace radically? What happened? — ‘Hot desking’ inevitably came up within this group; one person spoke of a transformation to open plan, hot desking and a clear-desk policy, including senior managers. Yes, there was resistance to this at first, but people have come to realise how working together in this way has encouraged the sharing of ideas quite naturally through conversation. It has to be said that the facilities provided were very good. Prior discussion with the users had raised the need for spaces for private conversation, and they had been provided. There are also ‘meeting pods’ set in the middle of the canteen area.

Good space design is crucial, said another person, and consultation with staff in advance is the key. When he worked at the Department of Energy and Climate Change, they had discovered that serendipitous meetings often started in lifts and then moved to an adjacent space to continue. In another job, at a research institute, staff had been worried about the place becoming too noisy for concentration; this was met by setting up booths with acoustic shielding, for study or for private conversation.

Canteen spaces are particularly ripe for creative use, and that goes well with a culture that encourages people to take lunch away from their desks. (I remember that when I was doing a series of training workshops at Enterprise Oil, their staff canteen provided lovely food free of charge, which was certainly a motivator in that direction!)

Q7: What do you understand by the term ‘digital workspace’? — This group understood a digital workspace to be something open and without boundaries of physical space or time, able to operate 24 hours a day, seven days a week. They noted that this requires broadband that is fast enough. It should enable you to do what you would do in a ‘normal’ or ‘standard’ workspace, while allowing for collaboration and the sharing of information.

Paul referred to work he had done in Africa, where there is usually very poor access to the Internet. But people adapted to that by communicating via WhatsApp – short, asynchronous conversations that can be picked up again after a communications breakdown.

Q8: In your experience, can you now replace a physical workspace with a digital workspace? If so, how? If not, why not? — This was the group I was in, with Edmund Lee, and the first thing we decided was that the afternoon’s conversation had had an unspoken bias towards office-type work. If you are a plumber, a farmer or construction worker, a social worker or a surgeon, a shop assistant or other front-line customer service worker, what you can achieve in a digital workspace will be strictly limited. So no, you cannot replace physical workspaces with digital ones, except in some narrowly defined fields.

For most of those at today’s event, a so-called digital workspace can substitute for many aspects of the physical workspace, but that depends on how good a digital surrogate you can create for the physicality of what you work with. Edmund works with archaeological excavations, and he noted that before you can consider implementing a digital workspace for such work, you have to find a way to make a digital surrogate of the things you work with. An example would be an expert in Roman pottery who has access to the physical artefacts, but nothing more than a digital representation of the site where they were found.

Another issue is the functionality and ‘affordances’ of the digital tools available. There are bandwidth and infrastructural constraints, and there are human factors. When conversations take place over a digital medium, can they convey body language? Paul agreed that this is a huge issue, and he had just been running a workshop with Chris Collison on improving work in virtual teams and communities. [Note that Chris Collison is the speaker at the September NetIKX meeting.]

There are also new skill requirements and support issues. Edmund told of how their IT department had installed large digital whiteboards in the main meeting rooms, but didn’t tell anybody how to use them. So, the technology worked for the IT department but for nobody else!

Q9: Does your organisation have a workspace strategy and if so, does it include digital workspace? — This did not result in a poster, but Malcolm Weston reported on successes at Aecom, which has now grown (by a process of acquisition and amalgamation) to 187,000 employees across 150 countries. That process required workspaces to be brought together, and collaboration between the business units to be enhanced. And so Aecom set out a formal workplace strategy, to be implemented in every office worldwide.

The implementation in London involved internal change-management consultants and interior-design consultants talking to different teams within the organisation, asking them what they liked about their current workspace environment, what they didn’t like, and what they wanted changed. The new environment was created in the offices in Aldgate Tower.

Aecom staff now work in ‘neighbourhoods’, with the colleagues in their team, though not always at the same desk. Teams which would naturally collaborate on delivering work are situated adjacent to each other. There are internal staircases between four of the floors, with open-plan breakout areas around them. Facilities range from small ‘walled sofas’ suitable for taking part in a conference call, through meeting rooms which can be booked via a phone app, to a small lecture theatre. No-one has a fixed computer; everyone has a mobile phone and a laptop; there is secure WiFi. You can also work from home via the VPN.

One of the drivers was to improve customer satisfaction; another was to avoid costly redesign by getting things right through collaboration, first time around. It also meshed with Aecom’s collaborative selling initiative; clients like to come and have meetings at Aecom’s place.

Q10: Looking forward to 2020, what changes do you foresee in the way you work and the devices you will be using? — This team described a Lloyds Bank demonstration of virtual presence, using a VR headset. They thought the challenges of the future might include an emphasis on self-service, the sheer variety of devices, and perhaps a decentralisation of information storage. The team chose ‘change’, ‘disruption’ and ‘managing complexity’ as key phrases.

Steve Dale spoke of having recently finished some work for an international organisation spread across thirty countries. Their policy is ‘extreme BYOD’ (bring your own device) – no rules at all about what equipment or software to use. They did an audit, and discovered sixty different systems in use. And there were concomitant problems – a lack of collaboration across teams, and how the hell do you find stuff? They did interviews with stakeholders, and discovered a split between people who like this freedom, and others who flounder in this lack of structure (particularly people newly joining the organisation).

Wrapping up

Paul skipped a number of his slides, which review the survey responses from Lisbon, Kuala Lumpur etc. A couple of slides also pulled out some of the insights which are beginning to emerge in the analysis Paul is doing with Clive Holtham and Ningyi Jiang at Cass Business School.

Paul referred to a meeting he and Victoria had recently had with Neil Usher at BSkyB. Neil has a twelve-point checklist: Daylight – Connectivity – Space – Choice – Control – Comfort – Refresh – Influence – Storage – Colour – Wash – Inclusion. Paul didn’t have time to unpack what these mean; apparently they are explained on Neil’s blog (http://workessence.com/). The two most important aspects, according to Neil, are natural daylight, and giving people choices.

Paul finished the afternoon workshop drawing attention to some closing slides which give contact details for himself and for Victoria.

Paul J Corney – paul.corney[at]knowledgeetal

On Twitter: pauljcorney

On Skype: corneyp

On mobile: +44 (0) 777 608 5857

Victoria Ward — victoria.ward[at]sparknow.net

A personal thought on ‘digital’ vs ‘virtual’

Paul and Victoria contrasted physical spaces where people meet and converse, and digital ones. I would prefer a contrast between physical and virtual spaces. My reason is that I wish to give a nod to older traditions of knowledge sharing which used correspondence and publication — an instance being the so-called Invisible College. Collaboration without face-to-face contact did not need an electronic medium to get started; it required shared language, writing, and a means of sending messages.

Posted in Uncategorized | Leave a comment

Gurteen Knowledge Café – 16 March 2017

Conrad Taylor writes:

In 2017, its tenth anniversary year, NetIKX is running a series of meetings with speakers who have spoken to us before. In March we invited David Gurteen to speak around the topic of ‘entrained and entrenched thinking’ and other constraints on knowledge sharing – and what we can do about them. Specifically, we wanted him to run one of his Knowledge Café events for us, in part because that process incorporates features designed to widen the scope of conversation and the consideration of diverse points of view.

As usual, these notes are constructed from my personal perspective.

About entrenched and entrained thinking

‘Entrenched’ thinking is something we pretty much understand. It’s when people refuse to consider the validity of any idea but their own, and it is often encountered in groups that see themselves as actively in opposition to another group. They are ‘dug in’ and refuse to budge. We’ve seen a lot of that in politics in the last year, but it occurs in all sorts of social and business environments too.

The phrase ‘entrained thinking’ is less familiar. It may have been coined by Dave Snowden and Mary Boone in their article in Harvard Business Review in 2007, where they define it as ‘a conditioned response that occurs when people are blinded to new ways of thinking by the perspectives they acquired through past experience, training, and success’. They note that both leaders and experts can fall into entrained thinking habits, which cause them to ignore both insights from alternative perspectives and those offered by people whose opinions they have come to disregard as irrelevant.

Evolutionary biology suggests reasons why falling back on available quick-and-dirty patterns of thinking (heuristics) has survival advantages over thinking everything through carefully from every conceivable angle; as Dave Snowden says, when you come across a lion in savannah country, it’s best not to analyse the situation too thoroughly before legging it up a tree. In his book Administrative Behavior (1947), Herbert Simon referred to such just-good-enough thinking as satisficing, and a study of the nature and role of heuristics in decision making was also central to Amos Tversky’s and Daniel Kahneman’s argument in Judgement Under Uncertainty: Heuristics and Biases (1982), which also introduced the concept of cognitive bias – a concept to which David Gurteen made reference.

However, there are times and situations in which it is good to cast a wider net for alternative ideas, which may turn out to be better than the established, so-called ‘tried and tested’ ones. The technique of brainstorming was pioneered in the field of advertising by Alex Osborn in 1939, and Edward de Bono introduced the concept of lateral thinking in 1967, following that with a veritable spate of books on creative thinking techniques.  (Note: The brainstorming process has been brought into question recently; see http://www.newyorker.com/magazine/2012/01/30/groupthink)

In this seminar and Knowledge Café workshop, David Gurteen focused on those blockages to the production and sharing of ideas that can occur in meetings and group conversations, and the actual practice of his Café technique shows some of the ways they can be overcome. So let’s get to understand the Café process, then move on to how David introduced our session, and close with a brief report of what came up in the closing in-the-round plenary session.

Introducing the Café process

David’s Knowledge Café process is a version of the World Café technique first devised in the mid 1990s by Dr Juanita Brown and David M Isaacs, an American couple who work with organisations to unleash ‘collective intelligence’ through ‘conversations that matter’. (See http://www.theworldcafe.com/) These techniques have been used by industrial companies, such as Hewlett-Packard and Aramco, and by local governments and non-profits in the context of social and economic development and community relations.

David says that he adopted the format as an antidote to ‘Death by PowerPoint’. He started running his Knowledge Café series in September 2002, in the Strand Palace Hotel. A number of NetIKX members have taken part in free London Knowledge Café events, which David facilitates about six times a year. More information can be found on his knowledge café website http://knowledge.cafe/

David has also run such sessions professionally for organisations across Europe, Asia and the Americas. They seem to work well in a wide range of cultural settings – even, he said, in those Asian cultures in which people often defer to authority. In a small group, it is easier to speak up about what you think, though as an organiser of such an event you may need to ensure that the groups are made up of equals.

The essence of the Café technique is to enable discussion around an open-ended question. Participants are divided into groups of three, four or at most five people, sat around tables (note: this is smaller than the typical size of a table group at a NetIKX workshop). In David’s events, the starting question is framed by having a speaker make a very short initial presentation of the topic – typically ending with the posing of such a question.

After the discussion around tables has gone on for some time, generally 15 minutes, the facilitator asks the table groupings to break up and re-form – for example, two people might stay on the same table while two move off to join other tables. After another 15 minutes’ conversation, the table groups are once again re-organised for a third round. David never uses more than three rounds of conversation in his own practice. The general aim of such Café techniques is to help people to accumulate a cascade of perspectives, and to widen their thinking.
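
To make the mechanics concrete for the procedurally minded, here is a toy sketch in Python – entirely my own illustration, not anything David prescribes (nobody runs software at a Knowledge Café!) – of dealing people into tables of about four and moving roughly half of each table on between rounds:

```python
# A toy simulation of Knowledge Cafe regrouping -- purely illustrative.
import random

def seat(participants, table_size=4):
    """Deal participants randomly into tables of roughly table_size."""
    people = participants[:]
    random.shuffle(people)
    return [people[i:i + table_size] for i in range(0, len(people), table_size)]

def reshuffle(tables):
    """Half of each table stays put; the other half moves to the next table."""
    stayers = [t[:len(t) // 2] for t in tables]
    movers = [t[len(t) // 2:] for t in tables]
    n = len(tables)
    return [stayers[i] + movers[(i - 1) % n] for i in range(n)]

people = ['P%02d' % i for i in range(1, 17)]   # sixteen imaginary participants
tables = seat(people)
for round_no in range(1, 4):                   # at most three rounds
    print('Round', round_no, tables)
    tables = reshuffle(tables)
```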

There are variations on this theme. One World Café practice is to put a large sheet of paper on each table and encourage people to jot down ideas or doodle pictures during their conversations, so that the next group gets a sense of what has gone before. Another version appoints a ‘table host’ who stays with the table, relays ideas from the previous round, and encourages the new group to add ideas and perspectives to what has gone before. Such a person might also act as a rapporteur in a closing plenary session.

David’s practice dispenses with table-level facilitators (and doodle pads and rapporteurs), which makes a Gurteen Café easier to organise. The accumulation of perspectives tends to happen anyway, as people tend to share, with their new group, the ideas that came up in the previous one.

In David’s version of the Café, he said, there is no reporting back. The café’s outcomes are about what each individual takes away in his or her head – and that will be different for each person. As Theodore Zeldin says, the best conversations are those from which you emerge as a slightly different person.

However, David later qualified that by mentioning circumstances in which gathering and reporting the ideas that surface can be very valuable as a contribution to a problem-solving process – for a company or project, for example. His own general events tend to end these days with a closing session bringing everyone together in a circle for freeform sharing of ideas ‘in the round’ – space permitting. We did this at the NetIKX Café.

David explained a few guiding principles. The Café is about dialogue, not debate – it’s not about winning arguments, but nor is it about seeking consensus. The idea is simply to bring ideas and issues to the surface. And it is OK to drift off-topic.

Asked whether it is different to run a Café session inside a particular organisation, David responded that he’s found that the format can be used for brainstorming or as an adjunct to problem-solving; in that case, one should start in advance by defining the objective, and design the process accordingly. For such gatherings, you probably do want to include a method for capturing the ideas that arise. But any such capture mechanism must not get in the way of the conversation – giving one person a flipchart and a pen will put them in charge and distort the free exchange of ideas.

Our meeting topic

David explained that in his short pre-Café presentation he would touch on some challenges that we need to overcome in order to make better decisions, and to be more creative in coming up with ideas. In the Café process we would then discuss how we might mitigate these challenges.

Cognitive bias. David recommended that we take a look at the Wikipedia article about ‘Cognitive bias’. That in turn links to a page, ‘List of cognitive biases’ — something like 200 of them, although it has been argued that they can be grouped into four broader categories, arising from too much information, not enough meaning, the need to act or decide quickly, or the limits of memory. One of the ideas that has made it into common parlance recently is ‘confirmation bias’ – we tend to pay heed to ideas that reinforce our existing views.

Entrained thinking. This seems to be a relatively new idea, put forward by Dave Snowden and Mary Boone as described above. The idea is that we are conditioned towards certain ways of thinking, and it can be because of our education and training. We are also influenced by our culture, the environment in which we have grown up, and our experiences. These influences are so subtle and ingrained that we are probably not aware of them.

David asked me (Conrad) if I see things the same way. I replied that I do – but that although ‘entrained thinking’ appears to be a new term, it isn’t really a new idea. When I was studying the History of Science at Glasgow University, an influential book was Thomas Kuhn’s The Structure of Scientific Revolutions (1962) – the book that introduced the phrase ‘paradigm shift’ to the English language. Kuhn argued that scientific progress was not, as generally assumed, a matter of development by accumulation of facts and theories, but more episodic, involving the overthrow of previously prevailing ways of seeing the world. And until the new paradigm prevails, the old one will have its deeply entrained defenders.

One example that Kuhn analysed at length was ‘the Copernican revolution’, which argued that the earth orbits the sun, rather than the other way around. Darwin’s theory of evolution also met with strong opposition from people invested in a creationist narrative and Biblical timescale for earth’s existence, and more recently the theory of plate tectonics and continental drift was resisted and mocked until the 1960s – yet it is now one of the ground truths of geological science. So Kuhn’s idea of a ‘paradigm’ – as a way of thinking that one considers normal and natural (but may later be replaced by a better one) – does carry in it a notion similar to ‘entrained thinking’.

Entrenched opinions. People may be resistant to taking new ideas on board – they take an entrenched position. Such people are not prepared to listen; they ‘know they are right’ and refuse to consider an alternative interpretation. In this case people may be very conscious of their views, which are closely bound up with their sense of themselves.

‘Speaking truth to power’ is a phrase that we hear a lot – it could mean not being afraid to say something to your boss, even though the consequences for you could be dire. The phrase recognises that power relations influence whether we choose to express our thoughts and views openly.

Loss of face. If you’ve always got to look good, it’s very difficult to speak up.

The Spiral of Silence – also called ‘social silencing’ – is an idea David encountered only recently. It’s a concept in political science and mass communication theory put forward by Elisabeth Noelle-Neumann, who argues that fear of isolation, of being excluded or rejected by society because you hold a minority opinion, may stop you from speaking out. She also argues that the media not only shape what ideas are the dominant ones, but also what people perceive to be the dominant ideas, even though that perception may not accord with reality. (Much of the mainstream media is telling us that the British public are united in a determination to leave the EU, for example.)

A related critique of social media – Facebook, for example – is that it encourages people to live in bubbles of confirmation bias, connecting us to people who share the same ideas as ourselves.

Groupthink is a well known term. Perhaps people in a meeting do all think the same way – or is there a sizeable group who think differently and just don’t want to rock the boat?

Last on David’s list was facilitator bias – was he, for example, in a position to bias our thinking?

The questions for the Café

So here were a few barriers that can get in the way of a good conversation, and thus impoverish group creativity and problem solving. David invited us to go into Café mode and talk about how to overcome these problems.

In the promotional text for this meeting, we had asked three questions, and David suggested that perhaps each ‘round’ of the Café might look at these.

  • The first question is, what factors in people’s backgrounds, professional education and culture, lead to them having a ‘blinkered’ view of the range of available opinions and policy decisions, especially at work? How might this be mitigated?
  • Second, when we meet together in groups to decide something in common, to come to a practical decision, what meeting dynamics are getting in the way of us accessing the broadest possible range of opinions and inputs? Could we be running those meetings differently and getting better results?
  • Finally, what are those two questions forgetting to consider?

Big Circle discussion notes

After three rounds of ‘Café table talk’, we rolled the tables out to the edges of the room and created a circle of chairs (there were about forty of us), and continued the conversation in that mode for about 25 minutes. I’m not going to report this blow by blow, but highlight some ideas put forward, as well as comment on the process.

It’s worth pointing out that the dynamics of discussion in the larger group were (as one might expect) very different from in the small groups. Some people said a lot, while about half said nothing at all. For the first nine minutes, about ten people spoke, and all were men. There was a tendency for one person to make a point that was reacted to by another person and then another and so on, in a ‘chain reaction’, even if that meant we drifted off topic. For about five minutes, the tone across the room got quite adversarial. So while the technique of making a big circle does help to gather in what had been thought across the table groups in a Knowledge Café, it can have its demerits or perils.

Meeting management methods. Steve Dale mentioned that at the LGA’s Improvement and Development Agency, there was a manager who used to take his team out on a walk – about ten people – and they talked as they walked. People wondered how practical that was! David Penfold suggested that if they walked and talked in groups of three, then they could stop at the pub and have the whole-group session – a Knowledge Café on legs!

Steve also pointed out that in some meetings – with fixed time and a business agenda – a free-flowing conversation would waste time and get in the way. Various people noted that one could loosen up thinking with a Café-style session or brainstorming, and follow that with a decision-making meeting – preferably after a fallow period for reflection.

Someone outlined a method she finds useful for eliciting a wide range of contributions. Pose an issue and get people to reflect on it individually for a while, in silence; then ‘pair and share’ with one other, to practise articulating your own ideas and also listening to others. Then you can progress to groups of four; then feed back from the groups to the whole assembly. When you are divided into small groups, we noted, the dominant types can only dominate a small group!

Dominance in group situations. Gender dominance or imbalance can affect the dynamic in discussions; so could dominance by age or ethnicity. Clare Parry spoke of occasions when someone from a minority makes a point and it is ignored; then someone from the majority says it, and suddenly it is a fantastic idea. These biases might be subconscious; but a younger person thought that discounting the opinions of younger people could actually be a quite conscious bias, based on the opinion that older people are more likely to know what they are talking about.

Bad paradigm shifts and entrainment. I (Conrad) thought it would be a mistake to think that paradigms always shift in the right direction! An example might be an assumption that information management is something that computer people do… We debated for a while whether this assumption was as widespread as 20 years ago: opinion differed.

Dion Lindsay, in his work around both information and knowledge management, finds that information professionals make a huge assumption that they are the best people to lead an organisation’s efforts in knowledge management. They see a continuum between librarianship, information management and knowledge management – which is not how the rest of the organisation sees things. And that, he said, is an example of entrained thinking (on both sides, perhaps).

Unfortunately, but predictably for this NetIKX crowd, this issue of IM and KM and IT set off a long series of exchanges about the rights and wrongs of managing information in a technology environment, which strayed right off the point – and got quite heated!

Influencing culture from the top down. One table conversation speculated that if a bunch of people at board level have got stuck in a rut with a particular way of doing things, this could be mitigated by bringing in someone with different thinking – like a football team appointing a maverick manager to shake things up. On the other hand, this could backfire if ‘the old guard’ react by resisting and subverting the ‘outsider’.

An open, learning culture. Stuart Ward argued that organisational culture can be a positive influence on how decisions are made  – if the people at the top visibly promote diverse thinking by asking people for inputs and opinions. Nor should people be penalised for making mistakes, if the result is learning that can be shared to improve future results.

We came to no shared and agreed conclusions – but that’s not what a Knowledge Café does.  Everyone took something different away in their heads.

Posted in Uncategorized | 3 Comments

Survey Results

Naomi Lees, NetIKX Membership Secretary writes:

Thank you to everyone who responded to our NetIKX survey earlier this year. We had some very interesting and useful responses.

Here is a brief overview of the points raised, and what NetIKX plans to do over the next 12 months:

Programme Planning

We had some very useful feedback on the seminar topics you would like to see, especially around the future and value of KIM; as well as practical KIM tools and techniques. You will be pleased to know that we will be covering all these aspects and more in our programme in 2017 and early 2018, so check www.netikx.org/events or https://netikx.wordpress.com/events/ for details of future events.

We also had some other useful suggestions for future seminar topics, which our programme planner has taken away for further cogitation! Watch this space for further updates.

Events outside London

You said that you would like to see more events outside London – we are currently looking at ways we can make this happen. If you are keen to host an event outside London, please get in touch.

Partnering with other KIM Groups

We had some very encouraging feedback on developing partnerships with other KIM groups. You will be pleased to know that we have a KIM Communities event coming up soon. We are always interested in building connections with other KIM groups, so please get in touch if you have any ideas for joint-working.

NetIKX Website

We received several comments on the website and we are really grateful for this feedback. You will be pleased to know that we are currently working on a new website, with lots of the features you have asked for, such as more KIM resources and the ability to make electronic payments.

The survey results can be viewed here: https://www.surveymonkey.com/results/SM-VZXFF523/

 

Posted in Uncategorized | Leave a comment

Information Design, with Conrad Taylor and Ruth Miller

On 26 January 2017, the speakers at the NetIKX meeting were Conrad and Ruth. Conrad has written up the two talks below. A fuller account of his own talk can be found on his Conradiator site at http://www.conradiator.com/kidmm/netikx-infodesign-conrad.html, as he notes below.

Photo David Dickinson

For some comments on the meeting, by Claire Parry, see the very end of this report.

Conrad’s Account

The topic of the NetIKX seminar on 26 January 2017 was ‘Information Design – approaches to better communication’. Information Design (ID) is a collection of practices using visual design, clear writing and thinking about human factors to make information easier to understand – especially information for the general public. Information designers work across a range of media, from road signs to government forms, user manuals to transport maps, bank statements to legal contracts, and increasingly information presented through computer interfaces and websites.

I was the first speaker, running through a background history and the theoretical underpinnings of ID, and showing examples with a strong visual component. Ruth Miller then took over and focused on the Plain Language side of things. Both Ruth and I have been active around the Information Design Association in the UK (IDA) for 25+ years.

Here I’m giving only a brief summary of my own presentation; as I had prepared it in written form with illustrations, I thought it best to convert that into a kind of stand-alone essay, which you can find at http://www.conradiator.com/kidmm/netikx-infodesign-conrad.html. Ruth’s contribution, however, is presented below at greater length, as it isn’t represented elsewhere.

Introducing Information Design

In my opening presentation I explained that the awkward label ‘Information Design’ emerged in the late 1970s as a rallying point for a diverse bunch of folk committed to clarity and simplicity in information presentation. That led to the founding of the Information Design Journal, a series of conferences, and organisations such as the IDA. Some people came into this from a graphic design background; some were committed to the simplification of written language. Psychologists, linguists and semioticians have also contributed their insights.

Despite this avowed interdisciplinarity, the ID community has sadly kept aloof from people in information and knowledge management. One of the exceptional people acting as a bridge is Liz Orna, long associated with NetIKX and its predecessor the Aslib IRM Network. In her writing, Liz has long emphasised the important role of ‘information products’ as artefacts designed for conveying knowledge.

Visual examples across the ages

I then conducted a whistle-stop history tour of innovation in making complicated stuff easier to understand through pictorial and typographic means, including:

  • Tables, a surprisingly old way of handling information (reaching way back to Sumeria in about 2500 BCE). My table examples included tide-tables, ‘ready reckoners’, and text in tabular formats.
  • Diagrams/drawings, ranging from more exactingly accurate ones such as anatomical atlases and sea-navigation charts, to line drawings and schematic diagrams which remove unnecessary detail so that they can focus on communicating (for example, how things work).
  • Harry Beck’s London Underground diagram got a special mention, given its iconic status. It is often called a ‘map’ but in reality it is a service network diagram, and this approach to transport information has been copied worldwide.

Harry Beck Underground diagram

  • Charts and graphs including Joseph Priestley’s first timeline, William Playfair’s invention of the line and area chart, and Florence Nightingale’s ‘coxcomb diagrams’ for presenting statistics.
  • Data maps, such as John Snow’s 1854 plot of cholera deaths around the Broad Street pump in Soho.
  • Network diagrams as used to represent links between entities or people, or to explain data flows in a software system.

I also mentioned business forms and questionnaires as an important genre, but I left this topic to Ruth who has more experience with these.

Where did Information Design thinking come from?

The above examples, which I showed in pictures, illustrate trends and innovations in the presentation of information. Next I looked at how the quest for clear communication became more conscious of itself, more bolstered with theory, and better organised into communities of practice.

This seems to have happened first in improving the clarity of text. In the 1940s, Rudolf Flesch and Robert Gunning proposed some objective ways of measuring the readability of text, by calculations involving the length of sentences and the average number of syllables per word.

Flesch Readability Chart
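
As an aside, the readability calculation is simple enough to sketch in a few lines of Python. The formula below is Flesch’s Reading Ease score as he published it; the vowel-group syllable counter is my own crude stand-in, since accurate syllable counting really needs a pronunciation dictionary:

```python
# A minimal sketch of the Flesch Reading Ease score.
# Higher scores mean easier text; 60-70 is often taken as 'plain English'.
import re

def count_syllables(word):
    """Very rough syllable count: runs of vowels, minus a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r'[aeiouy]+', word))
    if word.endswith('e') and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch's formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(len(re.findall(r'[.!?]+', text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease('Let thy speech be short, comprehending much in few words.'), 1))
```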

In the UK, Sir Ernest Gowers formulated a guide to plain English writing to educate civil servants, culminating in the famous book The Complete Plain Words, which is still in print after six decades and a number of revisions.

In the Second World War, the technical sophistication of weapons – plus, in Britain, the need to engage the public in war preparedness – seems to have driven innovations in technical documentation and the creation of training materials, and the job description ‘Technical Author’ came into being. As this trend in technical documentation continued in the post-war era, technical communicators organised themselves into associations like the STC and ISTC. In richer industries such as aerospace, technical documentation also pioneered the use of early WYSIWYG computer systems like the Xerox Documenter and Interleaf for document composition.

In 1943, the UK Medical Research Council formed its Applied Psychology Unit in Cambridge, initially to investigate how to help armed forces personnel understand and cope with information under stressful conditions. Post-war, APU researcher Pat Wright went on to investigate factors in text legibility and comprehension; Don Norman contributed to the establishment of Cognitive Science as a discipline, and helped Apple Computer as its first User Experience Architect.

In 1978, NATO sponsored a conference in the Netherlands about human factors and the design of non-electronic information displays; the papers were published as Information Design in 1984. The Information Design Journal was set up in the aftermath of the event and was then the focus for a number of conferences in the UK. As for the IDA, it was launched in 1991.

Some issues and developments

I rounded off my presentation by touching on three issues which have been woven in and out of Information Design practice down the years:

  • ‘Desktop publishing’, which put typesetting control and on-screen design into the hands of graphic designers, was a powerful enabler for information designers in particular.
  • Understanding the reader remains a challenge for anyone who truly seeks to communicate clearly. It’s dangerous to make assumptions about what will make sense to a user community unless you find out about that community. Today there is growing sophistication in using qualitative research methods and even ethnography to inform more effective writing and design.
  • Prototyping and usability testing – making prototypes is easier than before. Testing them with a sample of people representative of the eventual users can provide very useful insights, as Ruth would later illustrate from her own experience.

I closed my section of the meeting by speculating that the realm of information and knowledge management has hitherto tended to be dominated by librarians and like professionals, who focus on curating and organising collections of information resources. I would like there to be more engagement between this group and those actively engaged in designing and creating the information products which Liz Orna has described as having a central role in conveying knowledge between people.

Liz Orna on the chain of communication

I then handed the meeting over to Ruth.

Ruth Miller on plain language

Ruth explained that she did not train to be a plain language communicator; she fell into it and found it a perfect match for her personality. Like many people who work on improving communication, she notices things that are odd or confusing in everyday life, and wonders how they could be organised better. She would describe herself as a Simplifier: someone who looks at information and thinks about how to make it easier for people to understand.

More recently, Ruth has had the experience of teaching English to unaccompanied minors, as a volunteer at a refugee camp in Greece.

Plain language is not new. ‘Let thy speech be short, comprehending much in few words,’ it says in Ecclesiasticus (Sirach) (32:8), which dates from about 200 BCE. From Ptolemaic Egypt, we have a letter from a Minister of Finance to a senior civil servant, saying ‘Apollonius to Zeno, greetings. You did right to send the chickpeas to Memphis. Farewell!’ These quotes are from a 1988 pamphlet called ‘Making it Plain: a plea for plain English in the Civil Service’, with a foreword by Margaret Thatcher.

Thatcher promoted plain language writing. Early in her first government she engaged Derek Rayner, former CEO of Marks and Spencer, to commission a series of reports on efficiency in government, the ‘Rayner Reviews’. One of these, Forms into Shape (1981), analysed the use of forms in the Department of Health and Social Security (DHSS), and recommended the setting up of specialist Forms Units in government departments. Ruth would have more to say about forms design later, from her experience inside one of those units.

Ruth showed an illustration from the horticulture manual Flora, Ceres and Pomona by John Rea, beside an excerpt in which Rea says that he ‘has not inserted any of those notorious lies I have frequently found in books of this subject, but in plain English terms, set down the truth in every particular’. This is the earliest use Ruth has found of the phrase ‘plain English’ – it dates from 1665.

When plain language explanation should be unnecessary!

In many circumstances you shouldn’t need an explanation. Ruth showed a photo of a bathroom tap with a square knob set some centimetres to the right of it, from a British hotel. She couldn’t figure out how to make water come out of it. Evidently she wasn’t alone in this: the hotel had added a sign saying ‘Tilt Taps to Operate’ – which only made matters more confusing (the tap does not tilt, and there is only one of it). ‘Turn knob to operate tap’ would have been better – but even then, it’s an example of information as a prosthesis; had the artefact been better designed in the first place, it would not be necessary to help it with an information crutch.

Ruth also showed a photo of a fine mahogany boardroom table she had encountered at a business meeting. It’s useful to have a table on which to place your bag, so you can unpack the things you need for the meeting. On this table was placed a sign, ‘Please Do Not Put Briefcases on Tables as It Damages the Surface’. Ignoring points of dubious grammar, and the strange capitalisation… isn’t it just daft to provide a table you can’t use as a table?

‘If you go away from this meeting with only one thought,’ said Ruth, ‘it should be: think about the whole situation and challenge the need to explain, however clearly, something that is nonsense in the first place.’

Siegel and Gale experience

After working in government service, Ruth moved to the communication consultancy Siegel and Gale. This was an exciting time when computer technology and laser printers were changing how personalised documents such as utility bills and bank statements could be delivered. Now less ‘computerish’ fonts could be used; layouts could be more sophisticated; type size and boldness could be used for emphasis.

Siegel and Gale caused a stir in the 1990s with their redesign of the British Telecom phone bill. This put summary information on the front page, and more detail on follow-on pages; it used simplified language, and logical grouping of items. As a result the number of customer billing enquiries fell by 25%. BT also found that customers paid bills more promptly.

Siegel and Gale once won the Design Effectiveness Awards with a humble Royal Mail redirection form. Before the redesign, that form had an 87% error rate when customers filled it in, costing Royal Mail about £10,000 a week. The redesigned form paid for itself in just 17 days!

Siegel and Gale also moved into the redesign of bank statements. For Barclays, they changed the technical language of ‘Debits’ and ‘Credits’ to ‘Money out’ and ‘Money in’. In other words, name things the way people think about them, in the language they are used to.

Conrad had mentioned ethnographic research in passing; Ruth prefers to call it simply watching people use things. She once worked on a booklet for TalkTalk, helping people set up an Internet router at home; afterwards, the team researched how effective the booklet design had been. What had really helped was the inclusion of photos: this is what’s in the box, this is what it will look like when you have set it up, and so on.

This project did have its moments of comedy. There was a particular router which doubled as a picture frame: you could slip a photo into a slot on the front of it to ‘domesticate’ the thing. Ruth overheard someone telling a friend that she had just about set her router up, and had managed pretty well – but she wasn’t quite finished; now she had to find a photo! (Perhaps they should have added the word ‘optional’?)

Plain language: campaigns for awareness

The case for plain language was championed within the public sector in Britain, Australia and Canada. In the USA, the lead was taken more by private business. In the US financial sector, firms wanted people to understand things like investing. The US Securities and Exchange Commission pressed for consumer agreements to be written in language that the people signing them would understand.

In the UK, the Plain English Campaign deserves credit for raising awareness and getting the bandwagon rolling. They were, and still are, a force for good. They were also very clever at marketing: doesn’t ‘Plain English Campaign’ sound like a publicly funded body, or an NGO? In fact, they are a commercial business.

The ‘Crystal Mark’, which the PEC invented, was a brilliant idea and a money-spinner too. Many companies believed that getting a Crystal Mark on a document was a mark of quality, like a Kitemark: if you saw a Crystal Mark, the implication went, no-one should have a problem understanding the document. But that isn’t necessarily true – partly because the PEC is financially motivated to award Crystal Marks, but also because its focus is set far too narrowly on language construction. An over-long and complicated set of Terms and Conditions, set in small and hard-to-read type, would still get a Crystal Mark from the PEC – if they deemed the language to be ‘plain’.

Recent experience

More recently, Ruth has worked freelance, and she showed some small examples of projects which have brought her pleasure. She has enjoyed working with Standard Life, simplifying their policy documents, and materials about investments and pensions. What got them walking along the road to simplification was a letter from a customer who complained:

My degree is only in mechanical engineering. I can understand differential calculus, I can design all the machinery for a sewage treatment works, I can design you a bridge but I cannot understand what my policy is worth.

In the redesigns, they introduced summaries and contextual notes, and made use of two-colour printing. She added that these may be humble documents, but doing them well gets noticed – and it improves the quality of people’s lives.

Form and function: lessons from the DHSS experience

Ruth has long enjoyed doing battle with forms. When she was a civil servant, the language used in forms dated from the 1950s, and they were very difficult to fill in; no wonder the launch of the Plain English Campaign was marked by shredding forms in Trafalgar Square!

Ruth once worked in a unit in a government department (the DHSS) with a brief to radically improve such forms. The team included writers and designers, and had a decent budget for research and testing too. They had input from Pat Wright, the applied psychologist Conrad had mentioned, and from the Adult Literacy and Basic Skills Unit; the RNIB provided input about impaired vision. They investigated everything that trips people up when they try to fill out a form – type size, vocabulary, question sequence and so on.

The unit was supposed to redesign 150 forms, and in the first two years they managed about eight! However, that seemingly slow progress was because the research, testing and analysis were heavily ‘front-loaded’ – an investment that paid dividends later.

With forms, there is sometimes a trade-off between length and complexity. Some forms in her collection are booklets of 28 or even 36 pages! People appear to prefer a long but easy-to-understand form. Reorganising questions so that all you have to do is tick a box is helpful – but it takes space. Clear signposting to the next relevant part of the form is good – and also takes space!

Many forms have an introductory paragraph which tells people how to fill in the form (write clearly, write in block capitals, use a black pen…). However, research shows that hardly anyone reads that bit. In any case, people’s behaviour is not changed by such prompts, so why bother?

If you want to provide guidance as to how to fill out specific parts of a form, provide it at the ‘point of use’ – embed your explanations, and any necessary definitions, right in the questions themselves. An example might be the question: ‘Do you have a partner?’ Then you can clarify with something like ‘By partner we mean someone you are married to, or live with as if you were married to them’.

It’s useful to establish what graphic designers call the grid – a set of rules about how space is to be used on the page to lay out the form. For example, the questions and explanations might be placed in a leftmost column, while the space for answers might span the next two columns. Ruth showed some examples of gridless and chaotic forms, later redesigned according to a grid.

Once upon a time, forms were made up only of type plus solid or dotted lines (for example, in letterpress printing of the early 20th century). That created a set of norms we need not feel bound by these days. Today, lithographic printing permits the use of tints (a light shade of a colour printed as a pattern of dots too small to be individually distinguished). Tints can help to distinguish the parts of the form that are for writing into (a plain white background) from the parts which ask the questions and provide help (type set on a tinted background). A second print colour, if affordable, can also be helpful.

Testing also found that it was very helpful to re-jig questions so they could be answered with tick-boxes. Boxes which are used to determine a ‘yes/no’ condition should follow a ‘yes’ kind of question, as in ‘Tick this box if you are married’.

Some such yes/no questions, if answered in the affirmative, will lead to others. Perhaps controversially, Ruth’s team in the DHSS reversed the usual order so that the ‘No’ tick box came before the ‘Yes’ one: this helped them to lay out the subsidiary questions more clearly. (In an online form, of course, such subsidiary questions can be made to disappear automagically if the answer is ‘No’.)
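As an aside on the online case, here is a minimal sketch in TypeScript of how such conditional disclosure might be wired up. The markup it assumes – a ‘hasPartner’ radio group and a ‘partner-details’ container for the subsidiary questions – is my own invention for illustration, not anything shown at the meeting.

```typescript
// Minimal sketch of conditional disclosure in an online form.
// The 'hasPartner' radio group and 'partner-details' id are hypothetical.
const radios = document.querySelectorAll<HTMLInputElement>(
  'input[name="hasPartner"]'
);
const subsidiary = document.getElementById('partner-details');

radios.forEach((radio) => {
  radio.addEventListener('change', () => {
    // A radio button fires 'change' only when it becomes selected,
    // so the value of the radio that fired is the current answer.
    if (subsidiary) {
      subsidiary.hidden = radio.value !== 'yes';
    }
  });
});
```

On paper, of course, the ‘No’-first ordering Ruth described has to do the same routing job through layout alone.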

Ruth mentioned ‘hairy boxes’ – those pesky ones with vertical separators that are intended to guide you to place one letter in each demarcated space. They’ve proved to be a complete disaster. Someone mentioned the US Immigration form for filling out before the plane lands, which has this feature.

That’s not the only problem with that US Immigration form, remarked Ruth. It’s very bad at conveying the relationship between question and response space: people often assume that the space for the answer is the one below the question. Only when they come to the last question do they find that the questions are set below the spaces for answering them.

Signposting is important in complex forms, helping people to skip questions that don’t apply to them (‘If you answered “No” here, go forward to Section C’).

For the benefits claim forms, the DHSS team realised that many claimants don’t have the fine motor skills to write small, so they made more space for the answers – and left a substantially larger space for the signature.

Many forms end at the signature, but the DHSS team added a section telling the form-filler what to do next, what supporting documents to attach, and what would happen subsequently. It helped manage expectations and gave people a sense of the timescale on which the DHSS would respond.

Quick exercise

Ruth got us to work in pairs on an exercise based on the competition which the Plain English Campaign used to set in the pages of the Daily Express. She had multiple copies of three or four real-life examples of gobbledygook and invited us to simplify the messages; we wrote our alternatives on the small A4 whiteboards which she uses in teaching (called ‘show-me’ boards in the trade), so we could hold them up and compare alternatives across the room.

One of the original offerings read: ‘We would advise that attached herewith is the entry form, which has been duly completed, and would further advise that we should be grateful if you would give consideration to the various different documents to which we have made reference.’

One suggested rewording was ‘Please note the documents referred to in this form’; another was ‘Here is the entry form; please note the referenced documents.’ The PEC’s original winner was ‘Attached is the completed entry form. Please consider the documents referred to’ – though Ruth personally preferred the ‘Here is’ version. We went through a couple of other examples too.

Problem areas which people noted included:

  • use of the subjunctive mood in verbs
  • use of the passive voice in verbs
  • long sentences with multiple clauses

In the wording of contracts, it may be unclear who is meant by ‘we’ and ‘you’ in something the customer is supposed to sign. Jane Teather said that since the company had commissioned the form, it should be ‘we’ and the customer ‘you’.

Something else that occupied us for a few minutes was the changing norms around the use of ‘shall’ versus ‘will’.

Four Cs

Ruth offered four Cs as ideals: Clear, Consistent, Concise and Compelling.

The ‘consistency’ ideal suggests that if you set up a convention in the communication – such as who is ‘we’ and who is ‘you’ in a text, and what something is to be called – you should stick to it. This defies the literary notion of ‘elegant variation’, whereby you ransack the thesaurus for synonyms rather than re-use the original term; that may make for a fine essay, but for these purposes, bin it. Once you have called a spade a spade, stick with it.

In written communications with a broad public, said Ruth, subsidiary and relative clauses are probably confusing and best broken out into separate sentences. Likewise, she pronounced a fatwa against parentheses: anything in brackets or between en dashes. These are not bad English by any means, but you risk confusing the wider audience, and material in parentheses risks being thought of as less important (though you might move ‘bracketed bits’ to the end, she said, which is what I am doing now).

In response to a redesign Ruth showed, transforming a bullet list into a tabular layout, a question was raised about the accessibility implications of tabular data online for blind computer users. My own feeling, confirmed after discussion with others, is that an HTML table will ‘linearise’ reasonably when reduced to readable text, e.g. for voice-synthesis presentation: first the header row is read, then the first body row, then the next, and so on. However, this isn’t good enough. A table is an inherently visual device which allows the reader to pay selective attention to rows and columns. Really, the information should be completely re-organised to make an audio presentation meaningful to a vision-impaired person. (Think about how you would present the information on radio!)
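To make that point concrete, here is a sketch of one way a simple table might be re-organised for audio – pairing each cell with its column header, so that each row becomes a self-contained spoken sentence rather than a bare run of values. This is my own TypeScript illustration (the function name and markup assumptions are mine), not anything presented at the meeting.

```typescript
// Sketch: re-organise a simple HTML table for audio presentation by
// pairing each body cell with its column header. Assumes one header
// row of <th> cells inside <thead>; the function name is hypothetical.
function lineariseForAudio(table: HTMLTableElement): string[] {
  const headers = Array.from(table.querySelectorAll('thead th')).map(
    (th) => th.textContent?.trim() ?? ''
  );
  const sentences: string[] = [];
  for (const body of Array.from(table.tBodies)) {
    for (const row of Array.from(body.rows)) {
      // e.g. "Plan, Standard; Monthly cost, £10; Data allowance, 5 GB."
      const parts = Array.from(row.cells).map(
        (cell, i) => `${headers[i]}, ${cell.textContent?.trim() ?? ''}`
      );
      sentences.push(parts.join('; ') + '.');
    }
  }
  return sentences; // one spoken 'sentence' per table row
}
```

Even so, this is only a palliative: a genuinely audio-first presentation may need to abandon the tabular structure altogether, as suggested above.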

Ruth’s overall approach to making textual information more accessible includes these tips:

  • Look for patterns in the text that can be exploited, for example by reorganising material into bullet lists. If Ruth sees a series of clauses linked by ‘and’, she considers bullet points as an alternative.
  • If a list of bullet points gets excessively long, analyse to see if it can be broken into two shorter lists.
  • Break up large slabs of text; Ruth avoids paragraphs which are more than three or four lines long.

Four mantras

Here are four other thoughts which Ruth offered in the course of the afternoon:

  • ‘Nonsense in plain language is still nonsense!’ – as someone in Standard Life had remarked.
  • Robert Eagleson, an Australian practitioner in plain English: ‘It’s the writer’s responsibility to be clear, not the reader’s responsibility to understand.’
  • ‘Clear writing stems from clear thinking.’
  • ‘Simplicity isn’t simple to do.’ Communicating well is an art, a craft, a skill, and it is not simple to do well. Because writing is something everybody does daily, it’s tempting to think that everyone can do it well. Testing reveals this is not true! There is scope here for learning and for training.

Reactions

We would be interested to hear people’s reactions to this topic. Meanwhile here are some thoughts posted by NetIKX committee member Claire Parry:

  • Given the constraints of a half-day seminar, we inevitably only scratched the surface of this vast topic. Several participants commented afterwards that they would have liked to discuss design issues specific to online forms – maybe a topic for a future seminar?
  • I also wondered how we could take the discussion forward to apply information design principles to the Internet of Things, the need for documentation to be readable by both humans and machines, the ‘mobile-first’ philosophy and the move towards embedding user manuals in products.
  • As these are all areas where there is a clear need for interdisciplinary collaboration, it was encouraging to see participants from both information management and technical communications backgrounds contributing to the seminar and acknowledging our common aims. In an era of ‘alternative facts’ and ‘fake news’, clear and accurate communication is more important than ever.
Posted in Developing and exploiting information and knowledge | 1 Comment