What is Information?

“What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.”

Richard Dawkins
The Information Galaxy Illustration
The Information Galaxy (or a squiggly doodle, depending on your perspective)

Introduction

What is information, and how does it get around?

This post started as an excuse to explore this high-level question and gain a more comprehensive understanding of the expansive concept we call information.

Over the course of writing this post, I went down many rabbit holes, emerging some hours later, dazed and confused but full of wonder. Perspectives on this question ricochet from the most abstract analyses in the philosophy of information to the most mathematical and scientific studies in the bowels of information theory. The first part of this question evokes strong responses from the camps of philosophy, metaphysics, mathematics, physics, biology, computer science, and art. “What is information?” simultaneously unites and divides all those who seek to answer it.

It was a delightful topic to write about, what with all those opinions bouncing around at all these intersections. The process was full of opportunities to challenge my assumptions and expand my understanding of the world.

The intent of this post is to communicate the breadth and depth implied by this question. And hopefully, to instill a bit of wonder about the world.

What is Information?

Simple question, right?

Laughing Minion Gif

At least that’s what I first thought.

To start exploring this question, I’ll cover how the concept of information has been acknowledged by science and information theory. Then I’ll provide a brief overview of information theory, and finally present some parting thoughts on the question.

Why cover science and then information theory?

Increasingly, the physicists and the information theorists are one and the same. The bit is a fundamental particle of a different sort: not just tiny but abstract—a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

Gleick, James. The Information: A History, a Theory, a Flood (pp. 9-10). Knopf Doubleday Publishing Group. Kindle Edition.

Science!

Doctor Who Physics Gif

In 1929, Leo Szilard addressed a paradox that had plagued physicists for more than half a century. He posited that information could be dispensed in small units (bits) that could counterbalance entropy (simply put, entropy is disorder). His thought experiment opened up a proverbial new solar system of thought. (Learn more about the paradox and his thought experiment here.)

In sum – information brings organization and order; it is the counterpart to disorder.

Information, in its connotation in physics, is a measure of order—a universal measure applicable to any structure, any system. It quantifies the instructions that are needed to produce a certain organization. This sense of the word is not too far from the one it once had in old Latin. Informare meant to “form,” to “shape,” to “organize.”

This information flow, not energy per se, is the prime mover of life—that molecular information flowing in circles brings forth the organization we call “organism” and maintains it against the ever-present disorganizing pressures in the physical universe. So viewed, the information circle becomes the unit of life.

Loewenstein, Werner R. The Touchstone of Life (pp. xv-xvi). Oxford University Press. Kindle Edition.
Great Circle of Life, Lion King Gif

How much information is needed to counterbalance disorder?

The more disorder, the more information is needed to bring order. Any element/system that can be reproduced in many (equivalent) ways is perceived as disorderly and requires more information to bring about order.
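Here’s a minimal sketch of that idea in Python (a toy of my own, not from the sources quoted above): the more equivalent ways a system can be arranged, the more bits it takes to pin down any one arrangement.

```python
import math

def bits_to_specify(num_equivalent_arrangements: int) -> float:
    """Bits needed to single out one arrangement among equally likely ones."""
    return math.log2(num_equivalent_arrangements)

print(bits_to_specify(2))      # 1.0 bit: a single coin, heads or tails
print(bits_to_specify(16))     # 4.0 bits: a slightly messier system
print(bits_to_specify(2**20))  # 20.0 bits: a very disordered system
```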

Over time, systems shed their information (order) and maximize their entropy (disorder). This phenomenon, while rooted in physics, is also known in software engineering, where it’s called software entropy: as a system is modified and added to over time, its disorder will increase unless maintainers and builders actively work against the buildup. How can we prevent this? Some folks suggest fixing metaphorical broken windows as an effective preventative measure.

So that’s how information, as a concept, came into being, at least in physics.

Science, out.

Bill Nye Science Gif

Information Theory

Raccoon with an Abacus Gif
Maths

This section will, at an extremely high level, provide a brief history of the concept of information in the realm of information theory, and then outline the basics of information theory.

The Concept of “Information”

This field had a promising start in the 1920s, when telegraphy was booming (not literally, of course) and communications infrastructure was rapidly expanding. Due to the ever-increasing importance of reliable transcontinental telegraphy, many aspects of that communication system came under theoretical and applied scrutiny. Since there was not yet an established field of information theory, these topics were studied at a messy/amazing/unsustainable intersection of engineering, mathematics, and “communication systems.”

Here’s an abridged history:

  • 1924: Harry Nyquist draws a distinction between the signal itself and the intelligence it carries. He begins to discuss how a system could be optimized for the transmission of intelligence (his word, not mine). As part of that, he realizes that communication channels have a certain transmission maximum. He didn’t talk about information – just intelligence.
  • 1928: R.V.L. Hartley builds upon Nyquist’s ideas by removing some of the more interpretive/subjective elements (namely, the concern over meaning). He develops mathematical proofs for measuring the flow of intelligence. In his own words, he hoped “to accomplish… a quantitative measure whereby the capacities of various systems to transmit information may be compared.” Source. (A toy sketch of his measure follows this list.)
  • 1948: Claude Shannon, widely regarded as the father of information theory, cites Nyquist’s and Hartley’s papers in his groundbreaking paper, A Mathematical Theory of Communication. More below. Note: Shannon made the jump from intelligence to information.
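To make Hartley’s measure a bit more tangible, here’s a toy sketch (my own illustration; I’ve picked base-2 logarithms for convenience, a choice Hartley himself didn’t commit to): a message of n symbols drawn from an alphabet of s symbols can carry at most n · log₂(s) bits.

```python
import math

def hartley_information(message: str, alphabet_size: int) -> float:
    """Hartley's 1928 measure, H = n * log2(s), in bits."""
    return len(message) * math.log2(alphabet_size)

# A 5-letter message over a 26-letter alphabet carries at most ~23.5 bits.
print(hartley_information("hello", 26))  # ≈ 23.5
```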

There are two parts to Shannon’s work:

  • Modeling information and information sources, and working from these models:
Diagram of general communication system, per Claude Shannon's 1948 paper
  • Developing theories on the sending of information across the channel, the limits of the amount of information, and noise.

I touch on Shannon’s work a bit more in a later section of this post, but if you want to know all about it, I recommend reading Shannon’s actual paper, this useful summary of his life from Scientific American, or just giving him a good ol’ Google.

Shannon’s work allowed all communication systems – radio, television, telegraph, etc. – to be unified under one model with common characteristics and problems. That said, “Shannon’s model does not cover all aspects of a communication system… in order to develop a precise and useful theory of information, the scope of the theory has [sic] to be restricted” (Source).

Information theory was born.

Lion King Simba birth gif
Simformation Theory is born!

The (Very) Basics of Information Theory

Information theory is a mathematical representation of the conditions and parameters affecting the transmission and processing of information (Encyclopaedia Britannica).

Information theory deals with three basic concepts:

(a) the measure of source information (the rate at which the source generates the information),

(b) the information capacity of a channel (the maximum rate at which reliable transmission of information is possible over a given channel with an arbitrarily small error), and

(c) coding (a scheme for efficient utilization of the channel capacity for information transfer). These three concepts are tied together through a series of theorems that form the basis of information theory summarized as follows:

If the rate of information from a message-producing source does not exceed the capacity of the communication channel under consideration, then there exists a coding technique such that the information can be sent over the channel with an arbitrarily small frequency of errors, despite the presence of undesirable noise.

Information Theory, Coding and Cryptography by Arijit Saha, Nilotpal Manna, Mandal
Racoon playing a water sprinkler gif
Trying to hold onto a sense of what that meeeeans

That’s dense, so let’s break that down. (A toy sketch in code follows the breakdown.)

Information system illustration

A) Measure of source information

  • “The rate at which the source generates the information.”
  • This is how many envelopes the source produces in a given stretch of time, or how much information is in the message.

B) Information capacity of a channel

  • “The maximum rate at which reliable transmission of information is possible over a given channel with an arbitrarily small error.”
  • How many envelopes can fit into the channel, how fast an envelope moves through the channel, and the tolerance for small errors in envelope sealing (just one example of an error type).

C) Coding

  • “A scheme for efficiently using the communication channel’s capacity.”
  • The envelope itself, the methodology of fitting the message in the envelope, the way information is encoded as a message, etc.
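Putting (a), (b), and (c) together, here’s a minimal sketch with made-up numbers (my own toy, not from the book quoted above) of how the fundamental theorem plays out: compute the source rate, compare it to the channel capacity, and if the rate is lower, some coding scheme can deliver the envelopes with arbitrarily few errors.

```python
import math

def entropy(probabilities) -> float:
    """(a) Measure of source information, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

source = {"A": 0.5, "B": 0.25, "C": 0.25}  # a hypothetical source
rate = entropy(source.values())             # (a) = 1.5 bits/symbol

capacity = 2.0  # (b) a hypothetical channel carrying 2 bits/symbol

# (c) The fundamental theorem: rate < capacity means a coding scheme
# exists that transmits the source with an arbitrarily small error rate.
print(f"rate = {rate} bits/symbol, capacity = {capacity} bits/symbol")
print("reliable transmission possible:", rate < capacity)
```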

Just a bit more about Shannon’s paper

Shannon also introduced two other concepts about information in the context of a communication system:

  1. Information is uncertainty. “More specifically, if a piece of information we are interested in is deterministic, then it has no value at all because it is already known with no uncertainty. From this point of view… the continuous transmission of a still picture on a television broadcast channel is superfluous. Consequently, an information source is naturally modeled as a random variable or a random process, and probability is employed to develop the theory of information” (Source).
  2. Information to be transmitted is digital. “This means that the information source should first be converted into a stream of 0’s and 1’s called bits, and the remaining task is to deliver these bits to the receiver correctly with no reference to their actual meaning” (Source). (See the sketch after this list.)
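Here’s a minimal sketch of point 2 (my own example): any message can be reduced to a stream of 0’s and 1’s, and the system’s only job is to deliver those bits correctly, meaning be damned.

```python
# Turn a message into the bit stream a Shannon-style system actually carries.
message = "hi"
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
print(bits)  # 0110100001101001, and the receiver never needs to know it means "hi"
```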

And he proved two theorems, which relate closely to (a), (b), and (c) above.

1. The source coding theorem introduces entropy as the fundamental measure of information which characterizes the minimum rate of a source code representing an information source essentially free of error. The source coding theorem is the theoretical basis for lossless data compression.

2. The second theorem, called the channel coding theorem, concerns communication through a noisy channel. It was shown that associated with every noisy channel is a parameter, called the capacity, which is strictly positive except for very special channels, such that information can be communicated reliably through the channel as long as the information rate is less than the capacity. These two theorems, which give fundamental limits in point-to-point communication, are the two most important results in information theory.

Information Theory and Network Coding by Raymond W. Yeung
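To see the source coding theorem in action, here’s a rough sketch (my own, not from Yeung’s book): for a memoryless source, the entropy is a floor that no lossless compressor can beat on average, and a real compressor like zlib lands close to, but not below, that floor.

```python
import math
import random
import zlib
from collections import Counter

random.seed(42)
# A memoryless source: symbol "A" with probability 0.9, "B" with 0.1.
data = bytes(random.choices(b"AB", weights=[0.9, 0.1], k=100_000))

n = len(data)
counts = Counter(data)
entropy_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())
zlib_bits = len(zlib.compress(data, 9)) * 8 / n

print(f"source entropy: {entropy_bits:.3f} bits/symbol")  # ≈ 0.469, the floor
print(f"zlib achieves:  {zlib_bits:.3f} bits/symbol")     # above the floor
```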

To learn more, I highly recommend picking up the book cited above.

So…

"Wait, what was the question" gif

What is information?

We’ve covered how “information” was discovered as a concept in both science and information theory, how information theory was established, and what concepts information theory touches. But my original question still stands.

Well. Shannon was certainly cautious about answering this question. Here’s what he had to say about it in the late 1940s.

The word ‘information’ has been given different meanings by various writers in the general field of information theory. It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition.

It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field.

The Lattice Theory of Information by C. Shannon

It’s an eloquent way of saying:

Nope Nope Nope Gif

But hey – that was in 1940-something. Things have probably changed, right?

Work on the concept of information is still at that lamentable stage when disagreement affects even the way in which the problems themselves are provisionally phrased and framed.

Information: A Very Short Introduction, Luciano Floridi, 2010

And:

What is information? The question has received many answers in different fields. Unsurprisingly, several surveys do not even converge on a single, unified definition of information (see for example Braman [1989], Losee [1997], Machlup and Mansfield [1983], Debons and Cameron [1975], Larson and Debons [1983]).

Source: https://plato.stanford.edu/entries/information-semantic/#2
Disgruntled Tangled Frog Gif
Unamused

It’s time for a bit of a leap of faith.

I’m going to make an assumption…

…That whoever has made it this far into this post wants a f**king answer to this question.

We’re going to branch down from “information” to “data,” and explain things from the perspective of the “semantic” philosophical theory. I don’t really know what that means either, but assume that lots of people agree and disagree with the direction I’m taking this, and that it’s not a black & white answer.

Wayfinding map through semantic information theory - don't worry, we only go two levels into it
A wayfinding map. Source.

The General Definition of Information (GDI)

This is a controversial and highly subjective answer to a seemingly simple question.

The General Definition of Information (GDI) defines information in terms of data + meaning. Various fields have adopted the GDI, generally those that consider data and information to be more concrete entities, e.g., information science.

The General Definition of Information (GDI):
x is an instance of information, understood as semantic content, if and only if:

(GDI.1) x consists of one or more data;

(GDI.2) the data in x are well-formed;

(GDI.3) the well-formed data in x are meaningful.

Source: https://plato.stanford.edu/entries/information-semantic/#1

Let’s break down each of the words in italics.

Data

This gets weird and metaphysical, real fast.

Again, sticking to the GDI’s accepted definition of a singular data point (a datum): a datum is a fact regarding some difference or lack of uniformity within some context.

For example, the top-level domain (TLD) of my website is .dev. True story. This is a datum (fact) about my website (context). There are many top-level domains, but even without knowing that fact, the existence of .dev suggests there may be non-.dev possibilities.
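If it helps, here’s a toy model (entirely my own framing, and a gross simplification of the SEP’s treatment) of a datum as a difference along some dimension within a context:

```python
from dataclasses import dataclass

@dataclass
class Datum:
    context: str    # what the fact is about
    attribute: str  # the dimension along which things can differ
    value: str      # this particular difference

tld = Datum(context="my website", attribute="top-level domain", value=".dev")
# The datum is informative precisely because .dev stands apart from the
# alternatives (.com, .org, ...). No possible difference, no datum.
print(tld)
```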

There’s loads more to the definition, but I’m not going to go into it here. If you want to learn more, this is the place to go.

Well-Formed

This means that the data are clustered together correctly, according to the rules (syntax) that govern the chosen system, code, or language. Syntax is what determines the form, construction, composition, or structuring of something.

For example, in a tree graph, a parent node will always appear above a child node. A child node will always appear below the parent node. This is syntax.

Parent node, child node illustration
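Here’s a minimal sketch (my own) of checking that syntax rule: data describing a tree are only well-formed if every child hangs below exactly one parent.

```python
def is_well_formed(edges) -> bool:
    """Tree syntax check: every child node has exactly one parent."""
    children = [child for _, child in edges]
    return len(children) == len(set(children))

print(is_well_formed([("parent", "child-1"), ("parent", "child-2")]))  # True
print(is_well_formed([("a", "c"), ("b", "c")]))  # False: "c" has two parents
```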

Meaningful

The data adhere to the semantics (meaning) of a system.

In the graph, we understand that the child nodes are sub-elements of the parent node. That relationship is due to one or multiple shared characteristics, functions, or properties. This is the semantic structure of a tree graph.

Becoming Information

“We can see now that information is what our world runs on: the blood and the fuel, the vital principle. It pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. What English speakers call “computer science” Europeans have known as informatique, informatica, and Informatik.”

The Information: A History, a Theory, a Flood by James Gleick

Put one or more well-formed and meaningful data together, and what have you got?

I can communicate something like this – a rudimentary diagram of the Domain Name System namespace! Information!

Namespace illustration
Bippity Boppity Boo Cinderella Gif
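For the curious, here’s that same rudimentary namespace sketched in code (my own toy, with hypothetical names): nested dicts supply the well-formed tree syntax, DNS delegation supplies the semantics, and together the data become information.

```python
def domains(tree, suffix=""):
    """Walk the tree, yielding fully qualified domain names."""
    for label, subtree in tree.items():
        name = f"{label}{suffix}"
        yield name
        yield from domains(subtree, f".{name}")

namespace = {
    "dev": {"example": {}},  # example.dev
    "com": {"example": {}},  # example.com
    "org": {},
}

print(list(domains(namespace)))
# ['dev', 'example.dev', 'com', 'example.com', 'org']
```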

Final Thoughts

Hitchhiker's Guide to the Galaxy Gif - the meaning of life is 42
The meaning of life, per the Hitchhiker’s Guide to the Galaxy

The Information Lifecycle (The Circle of Life)

Once information (data + meaning) exists, people or systems need to gain access to it via communication systems. Communication systems shape and are shaped by the information lifecycle, but the communication system is just one part of the lifecycle of information:

Information lifecycle illustration
The Information Lifecycle, modeled after a diagram in Floridi, Luciano. Information: A Very Short Introduction (Very Short Introductions) (p. 5). OUP Oxford. Kindle Edition.

The information lifecycle and the contents of this diagram deserve a post of their own. In essence, we’ve only really looked at a portion of this cycle.

The Information Galaxy

As I read and wrote more about this topic, the image I kept coming back to was one of a galaxy. The metaphor to astronomy is, so to speak, quite a vast one, but it captures the mindset I adopted while exploring this topic.

We know things about the galaxy. We don’t know things about the galaxy. There are observed phenomena and rules created to explain them. And there are phenomena, observed or not, that we haven’t been able to explain yet. The same holds true for the concept of information. We know some things about information and how it gets around. And then we don’t know some things about information and how it gets around. There are unsolved puzzles and undiscovered frontiers.

When considering information, we encounter the far reaches of human knowledge and understanding. Information binds us together and creates boundaries that separate us in equal measure.

One thing is for sure, there’s always more to explore.

Hitchhiker's Guide to the Galaxy dolphin gif

Resources

Most of these resources are linked throughout this post; the rest are additional reading for the curious, or simply some of my favorite materials on the topic.

  • Information Theory and Network Coding, 2nd Ed. by Raymond W. Yeung
  • Stanford Encyclopedia of Philosophy, Information
  • Information Theory, Coding and Cryptography by Arijit Saha, Nilotpal Manna, and Mandal
  • The Information: A History, a Theory, a Flood by James Gleick
  • Information: A Very Short Introduction by Luciano Floridi
  • The Touchstone of Life by Werner R. Loewenstein

art and code (3/3)

Art and Code Illustration
original artwork

preface

  1. This is the last of three posts about art and code, specifically about the similarities in chronological flow/process. I recommend reading the first post and second post prior to this one.
  2. These are subjective views/opinions/not facts and are from the perspective of a novice programmer and visual artist.
  3. This topic deserves a much longer extrapolation and could easily become a book. These posts will be fairly concise.
  4. This preface appears at the beginning of each post in the series.
  5. I am passionate about this topic and believe there are far more similarities than differences in artistic and technical pursuits. I am, overall, at a loss as to why the two generally are held up in contrast to each other.

end

It’s over, or almost over. The thing will be done soon.

For me, this part of the art-making and coding process is the most nerve-wracking. This is when I have to wrap things up, make them final, and commit to (sometimes temporary) permanence. And once it’s done, I put it out there for folks to look at, use, wish there was more to, or find faults with.

But at the end of the day (and the project), the bright side of not working on this thing anymore outweighs any of that.

This part of the process often contains the following actions or thoughts.

final touches

The final touches for both art and code usually involve tying up loose ends and cleaning up after the mess of creative flow. Because it can be a messy business. Below are examples of the finishing touches, all the way to actually being finished.

  • Linting | cleaning up the area and taking care of used materials. Tying up loose ends.
  • Unit tests (I mean, these should probably be written already) | preserving the work (spraying, coating), matting and framing. Making sure it can withstand at least some stressors.
  • Documentation | a description/title. Ensuring the work is comprehensible to others.
  • Making a PR or publishing the project | hanging it on a wall. Putting it out there.
  • Peer review | hanging it on a wall in a gallery that the whole world can walk into. Putting it out there.

time

With both art and code, one has to actively consider a number of factors through the lens of time. Most notably, time decay and environmental stressors.

Once the thing is created, unless the creator takes steps to ameliorate this, time decay inevitably sets in. Things, once created, capture techniques and technologies that exist or were popular at that point in time.

  • Languages, syntax of languages, tools, that version of x | colors, brush strokes, shapes, methods/schools

Here’s an architecture example. (Forgive me, architects; this is not a perfectly 1:1 analogy, and I don’t intend to communicate that these buildings are equivalent in their architectural significance.)

Alcázar of Segovia, Segovia, Spain. Source
Walt Disney Concert Hall, Los Angeles, CA. Source

They both need to be maintained. They both have different maintenance needs. They both have their charms. They are maybe not so charming to some people. Either way, they need to be taken care of. This is as it is with art and code.

Time decay involves environmental stressors. Examples:

  • If the art piece is exposed to the elements, its materials will degrade.
  • If the coding project is not kept up to date, security vulnerabilities crop up, and as the whole world of technology keeps moving forward, things can start breaking.
  • The public uses it | the public sees it. Yikes. Wear and tear occurs. Users and viewers experience it in ways they perhaps were never intended to experience it.

reflection

This is my favorite part of the painful process.

It is an opportunity to reflect and effectively do a postmortem on one’s own process. This is when we notice areas for improvement and have a chance to learn from ourselves and the effort expended on the creation.

This is also a delicate part of the process, because it is far too easy to start comparing. The final product to the original idea, the final product to other people’s work, etc.

Ideally, we come out of the reflection period with a general sense of accomplishment. Here are some techniques I’ve used to reflect productively:

  • Pretend I am mentoring the 8-year-old version of myself and openly self-dialogue about how things went (hey, not for everybody, but it works for me)
  • Timebox your reflection (e.g. 80 minutes)
  • Ask “what really worked for me?”
  • Ask “what really didn’t work for me?”
  • Ask “what will I bring forward into my next art or code effort that will help me enjoy the process more?”

I have baked this reflection phase into each of my coding projects. I list what I have learned in the README of each project I post on GitHub. I blog about my projects. It’s all part of documenting the things I learn, so I don’t have to learn them the extra-hard way (again). This is a powerful opportunity to collect data on oneself; data that can be leveraged to reach greater heights in the future.

feelings

In my experience, there are two big feelings at this part of the process.

Crushing perfectionism and self-doubt typically go hand in hand, and typically are manifestations of imposter syndrome. Learn more about imposter syndrome from the APA.

Impatience – I’m done. When will it be over?

Feelings happen. They have as much power as I allow them to have over this part of the process.

the future

“To iterate or not to iterate?” – that is the question.

  • Do you want to do it again, only better?
  • Are there ways you could build off of this?

Too soon.

Maybe later.

art and code (2/3)

Art and Code Illustration

preface

  1. This is the second of three posts about art and code, specifically about the similarities in chronological flow/process. I recommend reading the first post before this post: link.
  2. These are subjective views/opinions/not facts and are from the perspective of a novice programmer and visual artist.
  3. This topic deserves a much longer extrapolation and could easily become a book. These posts will be fairly concise.
  4. This preface appears at the beginning of each post in the series.
  5. I am passionate about this topic and believe there are far more similarities than differences in artistic and technical pursuits. I am, overall, at a loss as to why the two generally are held up in contrast to each other.

middle

After one prepares, ideates, and writes the first line of code – what happens? What can happen in the middle of making art and code (or a combination of both)?

Since the middle is most of it, there’s a lot that goes into it. In my experience, it generally contains one or more of the elements listed below. They may occur in a self-contained sequence, all at once, or not at all. It just depends.

flow

The act of creating art or code requires periods of intense concentration, during which one does the creating. During these periods, it is possible to enter a state of flow.

What is flow? It is a well-documented and well-researched phenomenon. 🤔

Just kidding. Flow is… well. I’ll let others explain it better than I could. (Please prepare for longer blocks of text as I try to stitch together the ideas of flow and creativity, which in itself has been the topic of numerous books over the years.)

Technically, flow is defined as an “optimal state of consciousness where we feel our best and perform our best…” … In flow, concentration becomes so laser-focused that everything else falls away. Action and awareness merge. Our sense of self and our sense of self consciousness completely disappear. Time dilates—meaning it slows down (like the freeze frame of a car crash) or speeds up (and five hours pass by in five minutes). And throughout, all aspects of performance are incredibly heightened—and that includes creative performance.

Flow States and Creativity, Psychology Today

This is when things are just. Great. Things stream from one’s fingertips. Interestingly enough, this is also when part of the brain may temporarily loosen its control over our reality.

Flow is also [theorized to be] caused by “transient hypofrontality”— the temporary deactivation of the prefrontal cortex. The PFC is the part of our brain that houses most of our higher cognitive function. Why does our sense of self disappear in flow? Because self is generated by large portions of the prefrontal cortex and with large swatches of this area no longer open for business, that sense vanishes completely.

Flow States and Creativity, Psychology Today

In other words, flow… could be an altered state of consciousness?

Woah, man.

In a 2008 study published in the journal PLOS, Charles Limb, an otolaryngologist at the University of California, San Francisco and accomplished jazz saxophonist, and Allen Braun, a speech researcher at the National Institutes of Health, designed a clever way to observe creative expression in the brain: an fMRI machine with a specially made musical keyboard. The two men recruited six professional jazz musicians for the study; while in the fMRI, the participants performed musical exercises ranging from a memorized scale to a fully improvised piece of music.

Observing the musicians’ brain activity as they performed each task, Limb and Braun found that when their subjects improvised, a region called the dorsolateral prefrontal cortex (DLPFC) became less active. Like a neural mother hen, the DLPFC is connected to planning, inhibition, and self-censorship; its deactivation has been suggested to play a role in altered states of consciousness such as daydreaming, meditation, and REM sleep. (A separate imaging study published in the journal Nature in 2012 found a similar lulling of the DLPFC during freestyle rap.) This pattern of brain activity, Limb and Braun wrote, may be “intrinsic to the creative process,” which “can apparently occur outside of conscious awareness and beyond volitional control.”

The Driving Principles Behind Creativity, from The Atlantic

Whew, okay. That was a lot to digest.

There’s so much more that I won’t touch on here. If you want to dig deeper into this topic, the pieces quoted above are a good place to start.

TL;DR: In creating art or code, it is possible to achieve a state of being where nothing else matters except the creation process. It is theorized that flow is caused by, or linked to, part of the brain sort of… turning off.

ideas and discoveries

In art and code, ideas and discoveries are closely related.

  • Idea: oh – I could do that?! When one thinks of something new to add.
  • Discovery: ah – I made that?! When one realizes they’ve already created part or all of something, usually unintended. Could be mistakes or happy accidents, etc.

Either way, in both art and code, you have to decide what to do with it. This additive experience can be tricky, and I certainly urge caution in this stage, because it can so easily lead to…

overworking the canvas

Some folks in visual art call it overworking the canvas. Some folks in commercial endeavors call it scope creep. Either way, it happens when we get carried away.

No, not spirited away.

Carried away.

In essence, we add more elements, to the detriment of the existing elements. Overworking the canvas and scope creep both degrade the overall quality of the creation. If we’re not careful, we’ll buckle under the weight, and then…

How do we know when it’s happening?

First, watch out for pennies.

There is a phenomenon that producers at Pixar call “the beautifully shaded penny.” It refers to the fact that artists who work on our films care so much about every detail that they will sometimes spend days or weeks crafting… “the equivalent of a penny on a nightstand that you’ll never see.”

Catmull, Ed. Creativity, Inc. Random House Publishing Group. Kindle Edition.

Pennies are tiny details that don’t really contribute to what the creation is intended to communicate or express.

Second, watch out for too many ideas. When this is happening, the act of creating may involve a sense of claustrophobia: we try to squish faaaaar too much into this stage of the creation. No squishing. Maybe you can add those ideas later, but not now; when in doubt, I recommend writing them down and revisiting them at a later date (maybe the next day).

What can one do to prevent scope creep/overworking the canvas, in both art and code?

  • Constant vigilance.
  • Notice how you feel when you sit down to work, and when you get up to take a break. Are you dreading your project? Do you feel like it’s just too complicated?
  • Impose some limits, e.g. time.

Unless you impose limits, people will always justify spending more time and more money by saying, “We’re just trying to make a better movie.” This occurs not because people are greedy or wasteful but because they care about their particular part of the film and don’t necessarily have a clear view of how it fits into the whole. They believe that investing more is the only way to succeed… Limits force us to rethink how we are working and push us to new heights of creativity.

Catmull, Ed. Creativity, Inc. Random House Publishing Group. Kindle Edition.

feelings!

In my experience, these usually take one of two forms.

Absolute confidence.

… or withering self-doubt.

That’s it, really. Moving on.

questions

Questions can come during pauses and throughout the process of creating, and can range across all of the below, and more:

  • Am I on track?
  • Does this function the way it’s supposed to/is it communicating what it’s supposed to?
  • How is it supposed to function/what is it intended to communicate?
  • Why am I doing this?
  • Who am I?
  • Why?
  • …What?!
  • Should I keep going?
  • When will this end?

Questions are an expected element of the process. It’s more important to figure out how to field them in a way that is sustainable and kind towards the creation and the creator.

breaks

Sometimes, it’s just too much and I have to take a break. Breaks could be 20 minutes, 2 hours, a few days, a few weeks, a couple of months, or, in some cases, a couple of years.

Speaking of breaks. It’s time for one of those.

To be continued.