[EDIT: re-watching the animation, I think it's too slow. I'll make future animations around 2x faster.]

Four HD animations (in one video) of color growths generated from my Python script color_growth.py (see http://s.earthbound.io/colorGrowth). The last stops near completion because at this writing that animation is not complete. This post starts with stills of the completed renders, then links to (or includes?) a YouTube upload of it. If you're seeing this post syndicated, you may have to look at the original post to see the video.

In batch rendering these animations I found that the renders proceeded far too slowly for my liking (days for one render), even though someone had already sped up my script a lot: https://github.com/earthbound19/_ebDev/pull/21. Also, the resulting animations had wonky perceptual speed (fast at the start, slowing toward the middle and end), so I updated the script to overcome that.

I overcame that by adding a --RAMP_UP_SAVE_EVERY_N option. With this option enabled, instead of saving an animation frame every N newly painted coordinates, the script ramps up the number of painted coordinates to wait (before saving a new animation frame) over time, so that the newly rendered area in each frame grows the way a selection rectangle grows as you drag it from one corner of the canvas to the other. This makes the animation perceptually more linear along every growth vector, though technically it's non-linear (speeding up, i.e. waiting longer between rendered frames). The result is that renders happen much faster (as fewer frames are saved), and the animation speed seems constant (it no longer seems to slow toward the middle and end; in fact, as it approaches filling the corners it seems to race toward them, which is a bit funny and I like it). It still takes a night to render two or three of these, but that's much better than days for one.
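
To illustrate the concept (a minimal sketch only; the function name, frame count, and exact ramp curve here are my inventions, not what color_growth.py actually does):

```python
# Hypothetical sketch of the ramp-up idea: instead of saving a frame every N
# newly painted coordinates, save whenever the painted count crosses a
# threshold, with thresholds spaced like the area of a selection rectangle
# dragged corner to corner (growing as k squared).
def ramped_save_thresholds(total_pixels, n_frames):
    """Painted-coordinate counts at which to save each animation frame."""
    return [round(total_pixels * (k / n_frames) ** 2)
            for k in range(1, n_frames + 1)]

# Example: a 1920x1080 canvas rendered as 240 animation frames.
thresholds = ramped_save_thresholds(1920 * 1080, 240)
print(thresholds[:3])   # [36, 144, 324] -- early frames come quickly
print(thresholds[-1])   # 2073600 -- late frames wait many thousands of pixels
```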

(I want to try faster Python interpreters / C transpilers, or a wholesale C port, to see if anything can speed it up much more dramatically.)

For HD animations these have relatively very small file sizes (only many megabytes, instead of hundreds or thousands of megabytes). I believe that's because video compressors exploit parts of an image remaining the same from frame to frame, and in these animations everything outside the growth boundary, both the increasingly large painted area _and_ the diminishing unpainted area, stays the same.

Color Growth 2019-10-04 22-46-35 c326e1

YouTube video publication of animation: https://youtu.be/9TVgyB-yYqE

This was generated via cellular automata, by a Python script I wrote and which another programmer improved. It simulates imaginary bacteria that leave color-mutating waste as they colonize. I posted about this script, with other example output, in more detail earlier, here:

Color growth from virtual bacteria (generative)

The Python script which generates this and virtually infinite varieties of works like it is at: http://s.earthbound.io/colorGrowth

Content of source settings file 2019_10_04__22_46_35__c326e1_colorGrowth-Py.cgp:

--WIDTH 1920 --HEIGHT 1080 -a 175 -q 2 --RECLAIM_ORPHANS 1 -n 1 --RSHIFT 5 -b [252,251,201] -c [252,251,201] --BORDER_BLEND True --TILEABLE False --STOP_AT_PERCENT 1 --RANDOM_SEED 2005294276 --GROWTH_CLIP (0,5) --SAVE_PRESET True

This work and the video, I dedicate to the Public Domain.

Done in the very-tacky-named program #procreate, I think. I am never going to say the name of that program without pointing out that it's tacky. Like, extremely tacky. Like how on earth could anyone have thought that name was going to help anything.

I have an ambition to number all my art works, but my backlog of unpublished work is so extensive, and that ambition so far out of sight, that you're getting titles from the file names, O Viewer. Unless or until I come back and rename this to include a number. Because the only other thing I would do is give abstract works nonsense names that mock postmodernism. The Eye of the Fish is a Balm to the Virulent Saliva of My Soul. Unquenchable Daggers Frame the Fierceness of My Enemy. Voluptuous Horrors Infiltrate My Sexless Madness. Quantitatively Eased Redistributionism Elevates the Indecisive Consciousness of Capitalism. AD NAUSEAM.

Mud Bones, AI-assisted art

My toddler once referred to wood chips as "mud bones," which is where I got the title for this abstract work. (And it's an idea that makes me think. It's maybe not wrong.) This is an output from http://www.thisartworkdoesnotexist.com, which presents AI (GAN)-generated works that are necessarily in the Public Domain. I modified it with recoloring, chalk blending, painterly settings, and custom wood-bark-like and noise alpha/blend layering in Photoshop.

Mud Bones, AI-assisted art

By Small and Simple Things v0.9.10 source code as an image
By Small and Simple Things v1.2.0 source code as an image

These images and this animation (there may be only one image if you're reading a syndicated post) are ways of representing snapshots and the evolution of source code. The second image represents the source code of version 1.2.0 of a Processing script (or program, or code) which produces generative art. At this writing, a museum install of that generative-art-producing program is spamming Twitter to death (via twitter.com/earthbound19bot) every time a user interacts with it. The generative art is entitled "By Small and Simple Things" (see earthbound.io/blog/by-small-and-simple-things-digital-generative/).

How did I represent the source code of a generative art program as an image? There are ways. One term for creating images from arbitrary data is "data bending": taking data intended for one purpose and representing it via some other common use of data. One form of data bending is text to image; that's what I do here.

But I didn't like the ways of representing code as a "data bent" image that I found when I googled it, so I made my own.

The approach I don't like takes every three bytes in a source (24 bits, or 24 zeros and ones) and turns them into RGB values (three values from 0 to 255 for red, green, and blue, the color components of almost all digital images you ever see on any screen). Conceptually that models nothing about the data as an image other than confused randomness, and aesthetically, it mostly makes random garish colors (I go into the problems of random RGB colors in this post: earthbound.io/blog/superior-color-sort-with-ciecam02-python/).

A way I like better to model arbitrary data as an image is to map the source data bytes into _one channel_ of RGB, so that one channel fluctuates while the others don't. This gauges low and high data points by color intensity, within a limited range of variation. In the data bent images here, the green and blue values don't change, but the red ones do. Green is zero, blue is full, and the changes in the source data (mapped to red) make the pixels range from pure blue (all blue, no red) to violet (all blue, all red).

My custom script that maps data to an image creates a PPM image from any source data (PPM is a plain-text format that describes pixels). The PPM can then be converted to other formats by many image tools (including command-line tools and Photoshop). That data-to-image script is over here: github.com/earthbound19/_ebDev/blob/master/scripts/imgAndVideo/data_bend_2PPMglitchArt00padded.sh
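
If you want the gist of that without reading the bash script, here is a minimal Python sketch of the same single-channel mapping (the width, padding, and file names are my illustrative choices, not the behavior of the actual script):

```python
# Minimal sketch of single-channel data bending: each source byte becomes the
# red value of one pixel; green stays 0 and blue stays full, so pixels range
# from pure blue (byte 0) to violet (byte 255).
import math

def data_to_ppm(in_path, out_path, width=256):
    data = open(in_path, "rb").read()
    height = math.ceil(len(data) / width)
    data += bytes(width * height - len(data))  # zero-pad the last row
    with open(out_path, "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")  # plain-text ("P3") PPM header
        for byte in data:
            f.write(f"{byte} 0 255\n")  # red from data, green 0, blue full

# Usage (hypothetical file names):
data_to_ppm("some_source_code.pde", "some_source_code.ppm")
```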

Again, the first image here (or maybe not here, if you're reading a syndicated copy of this post) is from the first version of By Small and Simple Things. The second image is from the latest version (at this writing). The animation is everything in between, assembled via this other script over here: github.com/earthbound19/_ebArt/blob/master/recipes/mkDataBentAnim.sh

To generate random irregular geometry like in these images (for brainstorming art): 1) install Processing (http://processing.org/download); 2) download this script I wrote for it: https://github.com/earthbound19/_ebDev/blob/master/processing/by_me/rnd_irregular_geometry_gen/rnd_irregular_geometry_gen.pde; then 3) press the "play" (triangle/run) button. It generates and saves PNGs and SVGs as fast as it can make them. Press the square (stop) button to stop the madness. I dedicate this Processing script and all the images I host generated by it to the Public Domain. The first two images here (you may only see one image if you read a syndication of this post) are contact sheets (grids of many images) from v1.9.16 of the script. Search URL to bring up galleries of output from this script: http://earthbound.io/q/search.php?search=1&query=rnd_irregular_geometry_gen

You probably can't reasonably copyright immediate output from this script, as anyone else can generate the same thing via the same script if they use the same random seed. But you can copyright modifications you make to the output.

[Syndicated post–if you don't see multiple images in this, open the given archival URL to the original post to see more images. Not sure I've figured out how to syndicate gallery posts yet.]

Last night I threw wonky parameters at version 1.6.1 of this work:

https://earthbound.io/blog/by-small-and-simple-things-digital-generative/

–which at this writing is in a museum, and which I have updated since to include shapes other than circles.

These images and a pending video are the result.

What happens if virtual bacteria emit color-mutating waste as they colonize? This 16 megapixel thing happens.

[Later edit: and many other things. I have done many renders from this script, and evolved its functionality over time.]

2019_10_04__16megapixels__bbeb28_colorGrowth-Py.png

Inspired by this computer generated contemporary art post (and after I got the script to work and posted it here), I wondered what the visual result would be from an algorithm like this (sketched in code after the list):

– paint a canvas with a base color
– pick a random coordinate on it
– mutate the color at that coordinate a bit
– randomly walk in any direction
– mutate a bit from the previous color, then drop that color there
– repeat (but don't repeat on already used coordinates)
– if all adjacent coordinates have been colored, pick a new random coordinate on the canvas [in later versions of the script, which has evolved over time: OR DIE]
– repeat [OR DON'T, as other "living" coordinates will repeat the process; this is less necessary if the virtual bacteria colonize]
– [later script versions: activate orphan coordinates that no bacteria ever reached, and start the process with them]
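
Here is a minimal Python sketch of that core loop (illustrative only; the real color_growth.py linked above does far more, and these parameter names are made up):

```python
import random

def color_growth(width, height, base=(252, 251, 201), steps=200_000, drift=5):
    canvas = [[None] * width for _ in range(height)]  # None = unpainted
    x, y = random.randrange(width), random.randrange(height)
    color = base
    for _ in range(steps):
        # mutate each RGB channel a little from the previous color, drop it here
        color = tuple(max(0, min(255, c + random.randint(-drift, drift)))
                      for c in color)
        canvas[y][x] = color
        # random-walk to an unpainted adjacent coordinate; if boxed in, jump to
        # a new random coordinate (the real script handles this more carefully)
        free = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if 0 <= x + dx < width and 0 <= y + dy < height
                and canvas[y + dy][x + dx] is None]
        if free:
            x, y = random.choice(free)
        else:
            x, y = random.randrange(width), random.randrange(height)
    return canvas
```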

Then I wondered what happens if the bacteria duplicate–if they create mutated copies of themselves which repeat the same thing, so that you get spreading colonies of color-pooping bacteria.

I got a Python script working that accomplished this, and while, with great patience, it produced amazing output, I was frustrated with its inefficiency (a high-resolution render took a day), and wondered how to make it faster.

Someone with the username "scribblemaniac" on GitHub apparently took notice of image posts I made linking to this script, figured out how to speed it up by many orders of magnitude, and opened a pull request with a new version of the script. (They also added features, and used a new file name. I merged the request.) [Edit: I later merged their script over the original name, and copied my original script to color_growth_v1.py.] The above image is from the new version. It took ~7 minutes to render; the old version would have taken maybe 2 days. (If the link to the new version goes bad, it's because I tested and integrated or copied the new version over the old file name.)

In a compiled language, it might be much faster.

I did all this unaware that someone else won a "code golf" challenge by coming up with the same concept, except using colors differently. (There are all kinds of gorgeous generative art results in the various answers there!–go have fun and get lost in them!) Their source code is apparently down and forsaken, but someone in the comments describes speeding up the process in several languages and ultimately making a lightning-fast C++ program, the source of which is over here. Breathtaking output examples are over here. Their purpose is slightly different: use all colors in the RGB color space. Their source could probably be tweaked to use all colors from a list.

Here are other outputs from the program (which might not show up in syndicated posts–look up the original post URL given):

2019_10_04__16_49_47__ca6505_colorGrowth-Py
color growth script output
2019_10_04__17_57_32__755c0c_colorGrowth-Py
color growth script output
2019_10_04__17_59_22__989252_colorGrowth-Py
color growth script output
color growth script output
color growth script output with default settings but high resolution
color growth script output + high res

These are from randomly chosen RGB colors, which, as I went into in another post, tend to produce horrible color combinations. Le sigh; a random pick from CIECAM02 space might be awesome.

I dedicate all the images in this post to the Public Domain.

BSaST v0.9.13 seed 1713832960 frame 133

I wrote a script in the Processing language which randomly generates colored, nested circles on a grid akin to my cousin Daniel Bartholomew's work of the same title. When the Processing script runs, it animates the circles, and if you tap on them, their color animates. I entered it in the Springville Museum of Art's 34th Spiritual and Religious Art of Utah Contest (if it makes it into the show, it will be displayed on a large kiosk). [2019-10-04 UPDATE: This work made it into the show! It was on display at the Springville Museum of Art, October 16, 2019 – January 15, 2020.] Here is the artist statement:

"..by small and simple things are great things brought to pass.." -Alma 37:6

Tap or swipe circles and watch what happens!

Just like your interaction changes this work, I believe that God interferes with reality–sometimes to dazzling effect. I believe that mere existence is amazing besides, or if not, filled with promise.

Images you interact with are "tweeted" @earthbound19bot (Twitter social media).

I coded this in the Processing language with Daniel Bartholomew's support and input. It imitates his original pen-and-marker works of the same title, adds animation, and generates any of about 4.3 billion possible variations at intervals.

BSaST v0.9.13 seed 1713832960 frame 133

I dedicate all these images to the Public Domain. I can literally make 4.3 billion other ones if anyone "steals" these. [UPDATE 2: The kiosk saved as many user-generated works from interactions with it as it could, and I've archived them in my "firehose" gallery here.]

[UPDATE: there's a lot more to light and color science than I perhaps inaccurately get at in this post. Also, try color transforms and comparisons (if the latter is possible?) in Oklab.]

It turns out that all of the digital color models in wide use are often bad for figuring out which of two colors is "nearest" to a third, according to human perception.

Sometime in my web meanderings, I stumbled on information about the CIECAM02 color model (and space), including a Python library that uses it and a (gee-wow, astonishing what it can do with color) free Photoshop-compatible plugin that manipulates images in that space. [EDIT 2020-10-07: the link to that plugin is down and I can't find the plugin on the open web anymore. Here's a link to my own copy of it (in a .zip archive).] If you do color adjustments on images using an application that's compatible with Photoshop plugins (a lot of programs are), go get and install that plugin now! Also: a CIECAM02 color space browser app (alas, Windows-only, it seems?).

I wrote a Python script that uses that library to sort any list of RGB colors (expressed in hex) so that every color has the colors most similar to it next to it. (Figuring out an algorithm that does this broke my brain–I guess in a good way.) (I also wrote a bash script that runs it against all .hexplt files (a palette file format with one RGB hex color per line) in a directory.)
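
For flavor, here is a greedy nearest-neighbor version of that idea, assuming the colorspacious Python library; this is a minimal illustration, not the algorithm my actual script uses:

```python
# Greedy nearest-neighbor chain sort in CAM02-UCS, a perceptually uniform
# space derived from CIECAM02 (Euclidean distance there approximates
# perceptual color difference).
import numpy as np
from colorspacious import cspace_convert

def sort_hex_colors(hex_colors):
    rgb = np.array([[int(h[i:i + 2], 16) for i in (0, 2, 4)]
                    for h in hex_colors])
    ucs = cspace_convert(rgb, "sRGB255", "CAM02-UCS")
    remaining = list(range(len(hex_colors)))
    order = [remaining.pop(0)]  # arbitrarily start with the first color
    while remaining:  # repeatedly append the perceptually nearest color left
        last = ucs[order[-1]]
        nearest = min(remaining, key=lambda i: np.linalg.norm(ucs[i] - last))
        remaining.remove(nearest)
        order.append(nearest)
    return [hex_colors[i] for i in order]

print(sort_hex_colors(["ff0000", "00ff00", "ee2200", "22ee00"]))
# Keeps the two reds adjacent and the two greens adjacent.
```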

The results are better than any other color sorting I've found, possibly better than what very perceptive humans could accomplish with complicated arrays of color.

Here's an image of Prismacolor marker colors, in the order that results from sorting by this script (the order is left to right, top to bottom):

Prismacolor marker colors, sorted by nearest perceptual

For before/after comparison, here is an image of the same palette, but randomly sorted; the script can turn this ordering of the palette into the much more contiguous-appearing ordering above:

Prismacolor marker set colors, random order

(It's astonishing, but it seems like any color in that palette looks good with any other color in it, even though the palette comprises every basic hue, many grays, and some browns. They know what they are doing at Prismacolor. I got this palette from my cousin Daniel Bartholomew, who uses those colors in his art, which you may see over here and here.)

Some other palettes which I updated by sorting them with this script are on display in my GitHub repo of collected color palettes.

Here is another before and after comparison of 250 randomly generated RGB colors sorted by this script. You might correctly guess from this that random color generation in the RGB space often produces garish color arrays. I wonder whether random color generation somehow done in a model more aligned with human perception (like CIECAM02) would produce more pleasing results.

250 randomly generated RGB colors
250 randomly generated RGB colors, sorted in CIECAM02 color space

See how it has impressive runs of colors very near each other, including by tint or shade, with good compromises where colors aren't near, and with the colors that are perceptually furthest from everything at the end. Also notice that darker and lighter shades of the same hue tend to fall into separate lighter and darker runs, with colors that interpolate well between those runs, instead of mixing lights and darks in the same run, where the greater difference of tint/shade would break the contiguous feel.

Tangent: in RGB space, I tested a theory that a collection of colors which add (or subtract!) to gray will generally be a pleasing combination of colors–and found this to be often true. I would like to test this theory in the CIECAM02 color space. I'd also like to test the theory that colors randomly generated in the CIECAM02 space will generally be more pleasing alone and together (regardless of whether they were conceived as combining to form gray).
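
Here is a sketch for poking at both theories, again assuming the colorspacious library (the JCh value ranges are my untested guesses): generate random colors in CIECAM02's JCh axes (lightness, chroma, hue), keep only in-gamut sRGB conversions, and check how near the palette's channelwise average lands to mid-gray:

```python
import numpy as np
from colorspacious import cspace_convert

def random_perceptual_color(rng):
    while True:
        # random lightness (J), chroma (C), and hue angle (h) in CIECAM02
        jch = [rng.uniform(20, 90), rng.uniform(10, 60), rng.uniform(0, 360)]
        rgb = cspace_convert(jch, "JCh", "sRGB255")
        if all(0 <= c <= 255 for c in rgb):  # discard out-of-gamut picks
            return np.round(rgb).astype(int)

rng = np.random.default_rng(0)
palette = [random_perceptual_color(rng) for _ in range(8)]
print(palette)
# A channelwise mean near (128, 128, 128) would suggest a "gray-balanced" set:
print("channelwise mean:", np.mean(palette, axis=0))
```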

I really can't have those as the last images in this post. Here is a favorite palette.

Lake Bonneville Desert Colors

Here's the URL to that palette (in my palette repository).

[Edit 2020-10-07: I had renamed or moved several things I linked to from this post, which broke links. I corrected the links after a reader kindly requested to know where things had gone.]