My custom Filter Forge filter that helped make this image is incredibly inefficient, so I publish it only in a folder of a git repo (link).

I'm using it as an alpha mask in art I'm updating. I dedicate the image to the Public Domain. The original (see pub. source -> open from thumbnail) is blown up to 3442 x 1936 pixels.

This is output from an image bomber I coded in Processing, worked up with some impressionist and book-illustration-style presets in Dynamic Auto-Painter Pro (a program that tries to make painterly images from any source image). I then did some layering trickery in Photoshop to blend the styles. The sources for the image bomber were circles in 24 shades of gray aligned to human perception of light to dark (white to black), with some random sizing, squishing, stretching and rotating (which is what the image bomber does).

The purpose of images like this, for me, besides being cool by themselves, is to use them as transparency (alpha) layers for either effect or image layers in image editing programs. For alphas, white areas are opaque and black areas show through.
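The alpha rule just described (white opaque, black shows through) amounts to a per-pixel linear blend. Here's a minimal sketch of the idea in Python, not taken from any particular program's internals; the pixel-list representation and function name are made up for illustration:

```python
# Composite a top layer over a bottom layer through a grayscale alpha mask:
# mask value 255 (white) shows the top layer fully; 0 (black) lets the
# bottom layer show through; values in between blend proportionally.

def composite(top, bottom, mask):
    out = []
    for t, b, m in zip(top, bottom, mask):
        a = m / 255.0  # normalize mask value to 0.0-1.0 opacity
        out.append(tuple(round(tc * a + bc * (1 - a)) for tc, bc in zip(t, b)))
    return out
```

Image editors do this per channel for every pixel; the mask image's brightness is the opacity.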

This is my original work and I dedicate it to the Public Domain. Original image size (see syndication source, or if you're looking at the syndication source, click the thumbnail for the full image) : 3200 x 2392

By Small and Simple Things v0.9.10 source code as an image
By Small and Simple Things v1.2.0 source code as an image

These images and this animation (there may only be one image if you're reading a syndicated post) are ways of representing snapshots and the evolution of source code. The source code of the second image is version 1.2.0 of a Processing script (or program, or code) which produces generative art. At this writing, a museum install of that generative-art-producing program is spamming twitter to death (via twitter.com/earthbound19bot) every time a user interacts with it. The generative art is entitled "By Small and Simple Things" (see earthbound.io/blog/by-small-and-simple-things-digital-generative/).

How did I represent the source code of a generative art program as an image? There are ways. A term for creating images from arbitrary data is "data bending": taking data intended for one purpose and using or representing it via other common ways of using data. One form of data bending is text to image; that's what I do here.

But I didn't like the ways of representing code as a "data bent" image which I found when I googled it, so I made my own.

The approach I don't like is to take every three bytes in a source (each byte being 8 zeros or ones) and turn them into RGB values (three values from 0 to 255 for Red, Green and Blue–the color components of almost all digital images you ever see on any screen). Conceptually that doesn't model anything about the data as an image other than confused randomness, and aesthetically, it mostly makes random garish colors (I go into the problems of random RGB colors in this post: earthbound.io/blog/superior-color-sort-with-ciecam02-python/).

A way I like better to model arbitrary data as an image is to map the source data bytes into _one channel_ of RGB, where that one channel fluctuates but the others don't. This gauges low and high data points by color intensity within a limited range of variation. In these data bent images here, the green and blue values don't change, but the red ones do. Green is zero, blue is full, and the changes in the source data (mapped to red) make the pixels range from blue (all blue, no red) to violet/magenta (all blue, all red).

My custom script that maps data to an image creates a PPM image from any source data (PPM is a plain-text format that describes pixels). The PPM can be converted to other formats by many image tools (including command line tools and Photoshop). This data-to-image script is over here: github.com/earthbound19/_ebDev/blob/master/scripts/imgAndVideo/data_bend_2PPMglitchArt00padded.sh
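A minimal sketch of what such a data-to-PPM mapping can look like (this is an illustration, not the linked bash script; the default width and the sample input bytes are arbitrary choices):

```python
# Minimal sketch of the one-channel "data bending" idea described above --
# not the linked script, just an illustration. Each source byte becomes one
# pixel's red value; green stays 0 and blue stays full, so pixels range
# from blue (byte 0) to violet/magenta (byte 255).
import math

def data_to_ppm(data: bytes, width: int = 64) -> str:
    height = math.ceil(len(data) / width)
    padded = data + b"\x00" * (width * height - len(data))  # pad last row
    lines = ["P3", f"{width} {height}", "255"]  # plain-text PPM header
    for b in padded:
        lines.append(f"{b} 0 255")  # red varies with the data
    return "\n".join(lines)

# Example: render some code-like bytes (made-up sample input).
ppm_text = data_to_ppm(b"void setup() { size(800, 800); }", width=8)
with open("source_as_image.ppm", "w") as f:
    f.write(ppm_text)
```

The resulting .ppm opens in many image tools, or converts to PNG with e.g. ImageMagick.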

Again, the first image here (or maybe not if you're reading a syndicated copy of this post) is the first version of By Small and Simple Things. The second image is from the latest (at this writing). The animation is everything in between, assembled via this other script over here: github.com/earthbound19/_ebArt/blob/master/recipes/mkDataBentAnim.sh


To generate random irregular geometry like in these images (for brainstorming art): 1) install Processing (http://processing.org/download), 2) download this script I wrote for it (https://github.com/earthbound19/_ebDev/blob/master/processing/by_me/rnd_irregular_geometry_gen/rnd_irregular_geometry_gen.pde), then 3) press the "play" (triangle/run) button. It generates and saves PNGs and SVGs as fast as it can make them. Press the square (stop) button to stop the madness. I dedicate this Processing script and all the images I host generated by it to the Public Domain. The first two images here (you may only see one image if you read a syndication of this post) are tear or contact (many-image) sheets from v1.9.16 of the script. Search URL to bring up galleries of output from this script: http://earthbound.io/q/search.php?search=1&query=rnd_irregular_geometry_gen

You probably can't reasonably copyright immediate output from this script, as anyone else can generate the same thing via the same script if they use the same random seed. But you can copyright modifications you make to the output.


What happens if virtual bacteria emit color-mutating waste as they colonize? This 16 megapixel thing happens.

[Later edit: and many other things. I have done many renders from this script, and evolved its functionality over time.]

2019_10_04__16megapixels__bbeb28_colorGrowth-Py.png

Inspired by this computer generated contemporary art post (and after I got the script to work and posted it here), I wondered what the visual result would be from an algorithm like this:

– paint a canvas with a base color
– pick a random coordinate on it
– mutate the color at that coordinate a bit
– randomly walk in any direction
– mutate a bit from the previous color, then drop that color there
– repeat (but don't revisit already-used coordinates)
– if all adjacent coordinates have been colored, pick a new random coordinate on the canvas [in later versions of the script, which has evolved over time: OR DIE]
– repeat [OR DON'T, as other "living" coordinates will carry on the process; this repeat is less necessary if the virtual bacteria colonize]
– [Later script versions: activate orphan coordinates that no bacteria ever reached, and start the process with them.]
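The walk-and-mutate steps above can be sketched roughly like this (my illustration here, not the actual color_growth script; the canvas size, step count, mutation amount, and base color are all made-up parameters):

```python
# Rough sketch of the walk-and-mutate algorithm: drop a mutated color,
# random-walk to an uncolored neighbor, repeat; teleport when boxed in.
import random

WIDTH, HEIGHT, STEPS = 64, 64, 2000

def mutate(color, amount=12):
    # Nudge each RGB channel a little, clamped to the 0-255 range.
    return tuple(max(0, min(255, c + random.randint(-amount, amount)))
                 for c in color)

canvas = {}  # (x, y) -> (r, g, b); unset coordinates are "base color"
x, y = random.randrange(WIDTH), random.randrange(HEIGHT)
color = (127, 127, 127)  # base color (arbitrary)

for _ in range(STEPS):
    color = mutate(color)
    canvas[(x, y)] = color
    # Candidate moves: uncolored, in-bounds adjacent coordinates.
    neighbors = [(x + dx, y + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)
                 and 0 <= x + dx < WIDTH and 0 <= y + dy < HEIGHT
                 and (x + dx, y + dy) not in canvas]
    if neighbors:
        x, y = random.choice(neighbors)
    else:
        # All neighbors colored: pick a new random coordinate (or, in
        # later script versions, this walker would die instead).
        x, y = random.randrange(WIDTH), random.randrange(HEIGHT)
```

The bacteria-colonization variant runs many such walkers at once, each spawning mutated copies of itself.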

Then I wondered what happens if the bacteria duplicate–if they create mutated copies of themselves which repeat the same thing, so that you get spreading colonies of color-pooping bacteria.

I got a Python script working that accomplished this, and while with great patience it produced amazing output, I was frustrated with its inefficiency (a high-resolution render took a day), and wondered how to make it faster.

Someone with the username "scribblemaniac" on GitHub apparently took notice of image posts I made linking to this script, figured out how to speed it up by many orders of magnitude, and opened a pull request with a new version of the script. (They also added features, and used a new file name. I merged the request.) [Edit: I later merged their script over the original name, and copied my original script to color_growth_v1.py.] The above image is from the new version. It took ~7 minutes to render; the old version would have taken maybe 2 days. (If the link to the new version goes bad, it's because I tested and integrated or copied the new version over the old file name.)

In a compiled language, it might be much faster.

I did all this unaware that someone else won a "code golf" challenge by coming up with the same concept, except using colors differently. (There are all kinds of gorgeous generative art results in the various answers there! Go have fun and get lost in them!) Their source code is apparently down and forsaken, but someone in the comments describes speeding up the process in several languages and ultimately making a lightning-fast C++ program, the source of which is over here. Breathtaking output examples are over here. Their purpose is slightly different: use all colors in the RGB color space. Their source could probably be tweaked to use all colors from a list.

Here are other outputs from the program (which might not show up in syndicated posts–look up the original post URL given).

2019_10_04__16_49_47__ca6505_colorGrowth-Py
2019_10_04__17_57_32__755c0c_colorGrowth-Py
2019_10_04__17_59_22__989252_colorGrowth-Py
color growth script output
color growth script output with default settings but high resolution
color growth script output + high res

These are from randomly chosen RGB colors, which, as I went into in another post, tend to produce horrible color combinations. Le sigh. A random pick from CIECAM02 space might be awesome...

I dedicate all the images in this post to the Public Domain.

BSaST v0.9.13 seed 1713832960 frame 133

I wrote a script in the Processing language which randomly generates colored, nested circles on a grid akin to my cousin Daniel Bartholomew's work of the same title. When the Processing script runs, it animates the circles, and if you tap on them, their color animates. I entered it in the Springville Museum of Art's 34th Spiritual and Religious Art of Utah Contest (if it makes it into the show, it will be displayed on a large kiosk). [2019-10-04 UPDATE: This work made it into the show! It was on display at the Springville Museum of Art, October 16, 2019 – January 15, 2020.] Here is the artist statement:

"..by small and simple things are great things brought to pass.." -Alma 37:6

Tap or swipe circles and watch what happens!

Just like your interaction changes this work, I believe that God interferes with reality–sometimes to dazzling effect. I believe that mere existence is amazing besides, or if not, filled with promise.

Images you interact with are "tweeted" @earthbound19bot (Twitter social media).

I coded this in the Processing language with Daniel Bartholomew's support and input. It imitates his original pen and marker works of the same title, with animation, and generating any of about 4.3 billion possible variations at intervals.


I dedicate all these images to the Public Domain. I can literally make 4.3 billion other ones if anyone "steals" these. [UPDATE 2: The kiosk saved as many user-generated works from interactions with it as it could, and I've archived them in my "firehose" gallery here.]


[UPDATE: there's a lot more to light and color science than I perhaps inaccurately get at in this post. Also, try color transforms and comparisons (if the latter is possible?) in Oklab.]

It turns out that all of the digital color models in wide use are often bad for figuring out which of any two colors is "nearest," according to humans.

Sometime in my web meanderings, I stumbled on information about the CIECAM02 color model (and space), including a Python library that uses it and a (gee-wow, astonishing in what it can do with color) free Photoshop-compatible plugin that manipulates images in that space. [EDIT 2020-10-07: the link to that plugin is down and I can't find the plugin on the open web anymore. Here's a link to my own copy of it (in a .zip archive).] If you do color adjustments on images using an application that's compatible with Photoshop plugins (a lot of programs are), go get and install that plugin now! Also: a CIECAM02 color space browser app (alas, Windows only, it seems?).

I wrote a Python script that uses that library to sort any list of RGB colors (expressed in hex) so that every color has the colors most similar to it next to it. (Figuring out an algorithm that does this broke my brain–I guess in a good way.) (I also wrote a bash script that runs it against all .hexplt files (a palette file format which is one RGB hex color per line) in a directory.)
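The sorting can be sketched as a greedy nearest-neighbor pass. My illustration below uses plain RGB distance as a stand-in for the CIECAM02 distance the real script uses (an assumption for brevity; the principle is the same, only the distance metric differs):

```python
# Greedy similar-color sort: start from the first color, then repeatedly
# append whichever remaining color is nearest the last one chosen.
# Distance here is squared Euclidean distance in RGB, standing in for a
# perceptual (CIECAM02) distance.

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def sort_similar(hex_colors):
    remaining = list(hex_colors)
    ordered = [remaining.pop(0)]  # seed with the first color in the list
    while remaining:
        last = hex_to_rgb(ordered[-1])
        nearest = min(remaining, key=lambda h: dist(last, hex_to_rgb(h)))
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered
```

Run against a .hexplt file's lines, this produces an ordering where each color sits next to a close neighbor; the perceptual version just swaps `dist` for a CIECAM02-space comparison.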

The results are better than any other color sorting I've found, possibly better than what very perceptive humans could accomplish with complicated arrays of color.

Here's an image of Prismacolor marker colors, in the order that results from sorting by this script (the order is left to right, top to bottom):

Prismacolor marker colors, sorted by nearest perceptual

For before/after comparison, here is an image of the same palette, but randomly sorted; the script can turn this ordering of the palette into the much more contiguous-appearing one above:

Prismacolor marker set colors, random order

(It's astonishing, but it seems like any color in that palette looks good with any other color in it, even though the palette comprises every basic hue, many grays, and some browns. They know what they are doing at Prismacolor. I got this palette from my cousin Daniel Bartholomew, who uses those colors in his art, which you may see over here and here.)

Some other palettes which I updated by sorting them with this script are on display in my GitHub repo of collected color palettes.

Here is another before and after comparison of 250 randomly generated RGB colors sorted by this script. You might correctly guess from this that random color generation in the RGB space often produces garish color arrays. I wonder whether random color generation somehow done in a model more aligned with human perception (like CIECAM02) would produce more pleasing results.

250 randomly generated RGB colors
250 randomly generated RGB colors, sorted in CIECAM02 color space

See how it has impressive runs of colors very near each other, including by tint or shade, and good compromises when colors aren't near, with colors that are perceptually further from everything at the end. Also notice that darker and lighter shades of the same hue tend to go in separate lighter/darker runs (with colors that interpolate well into those runs in between!), instead of having lights and darks in the same run, where the higher difference of tint/shade would introduce a discontiguous aspect.

Tangent: in RGB space, I tested a theory that a collection of colors which add (or subtract!) to gray will generally be a pleasing combination of colors–and found this to be often true. I would like to test this theory in the CIECAM02 color space. I'd also like to test the theory that colors randomly generated in the CIECAM02 space will generally be more pleasing alone and together (regardless of whether they were conceived as combining to form gray).
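A minimal sketch of how such a combines-to-gray check can be expressed (this is my illustration of the idea, not the code I actually used for the test; the tolerance value is an arbitrary choice):

```python
# A set of RGB colors "combines to gray" if their per-channel averages come
# out roughly equal to each other -- i.e. the average color is neutral.

def averages_to_gray(colors, tolerance=10):
    n = len(colors)
    avg = [sum(c[i] for c in colors) / n for i in range(3)]  # per-channel mean
    return max(avg) - min(avg) <= tolerance
```

Pure red, green, and blue pass (each channel averages to 85), while a set skewed toward one hue fails; redoing this with CIECAM02 coordinates instead of RGB channels is the untested variant.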

I really can't have those as the last images in this post. Here is a favorite palette.

Lake Bonneville Desert Colors

Here's the URL to that palette (in my palette repository).

[Edit 2020-10-07: I had renamed or moved several things I linked to from this post, which broke links. I corrected the links after a reader kindly requested to know where things had gone.]

average of diff and average views of satellite photos of the Earth

This is one of thousands of images like it (each unique though) I've recently generated with an experimental process. The experiment is a success if I may say so.

This is the process to (potentially) get some way cool procedural images from satellite (or any!) images, accomplished with a new script at https://github.com/earthbound19/_ebArt/blob/master/recipes/diff_avg_supercomposites.sh :

Phase I.
– collect several cool satellite images of civilization and/or wilderness, e.g. from this site: https://earthview.withgoogle.com/
– for every image pair in the collection, make a "diff" image (subtract the RGB values of every pixel in one image from the corresponding pixel in the other image), and save the result
– for every image pair in the collection, make an averaged image (average the RGB values of every pixel in one image with the corresponding pixel in the other), and save the result
Phase II.
– liberally delete less impressive results
Phase III.
– for every diffed result, average it with an averaged result and save that.
– for every averaged result, subtract (diff) a diffed result.
– liberally delete less impressive results. Good luck–with 17 source images and heavy pruning in Phase II, this will give me 17k+ results, so far all of them compellingly cool.

(Phase IV: sort all results by approximate nearest similarity and string them together in a movie of crossfades to see works between the works.)

(Phase V: accidentally produce glitch art because your computer ran out of hard drive space and memory doing all this, but the processing script keeps calling the utilities that do this, and the utilities break. I'll post some glitch results later).

(Phase VI: realize you have a storage and bandwidth problem for your new many gigabytes of images.)
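The per-pixel math at the heart of the phases above can be sketched like this on raw RGB pixel lists (the actual script shells out to image utilities; the function names here are made up, and I use absolute difference as one reasonable reading of "diff"):

```python
# Sketch of the core "diff" and "average" operations on same-sized images
# represented as flat lists of (r, g, b) tuples.
from itertools import combinations

def diff_pixels(a, b):
    # Absolute per-channel difference of corresponding pixels.
    return [tuple(abs(x - y) for x, y in zip(pa, pb)) for pa, pb in zip(a, b)]

def avg_pixels(a, b):
    # Per-channel average of corresponding pixels.
    return [tuple((x + y) // 2 for x, y in zip(pa, pb)) for pa, pb in zip(a, b)]

def supercomposite_pass(images):
    # Phase I: for every pair of images, emit both the diff and the average.
    results = []
    for a, b in combinations(images, 2):
        results.append(diff_pixels(a, b))
        results.append(avg_pixels(a, b))
    return results
```

Phase III then feeds Phase I's outputs back through the same two operations, which is why the result count explodes combinatorially.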

"Narmth" is an invented adjective. The hex color scheme used for the color variants here is at: https://github.com/earthbound19/_ebdev/blob/master/scripts/imgAndVideo/palettes/recreated_palette_00001_narmth.hexplt

See http://s.earthbound.io/4q for archive, print and use options. ~ Doodled, scanned, fixed up and vectorized by yours truly. A hoity-toity robot talks about this at http://s.earthbound.io/artgib
Work 00090 variant 2 random color fills from color scheme narmth

This first is vector art (an SVG), which you may save and reuse. You may reuse these works freely under Creative Commons Attribution 4. I'd appreciate credit in reuse.

The animated variant is conceived as unobtrusive decorative video art. Or maybe it would be distracting. I don't know, because I don't know who displays art as such. Do you?

See http://s.earthbound.io/2y for original, print and usage ~ The swirling strokes in this were achieved with the liquid ink bristlecone preset in Corel Painter 2016 ~ A hoity-toity robot talks about this at http://s.earthbound.io/artgib
Work 00099 abstraction (cyan, blue, orange, red)

The swirling strokes in this were achieved with the liquid ink pine preset in Corel Painter 2016. Tap or click image for ~2K resolution, free for personal use. Here's a link to prints and merchandise available at pixels.com, and another link to prints available at ImageKind at up to ~ 35" x 56".

The following variant and resource images which I made along the way, I release into the Public Domain:

Variant via the Filter Forge "side to side" filter by Skybase:

An alpha resource via the Filter Forge Terrain Heightfield Generator by LigH; I used this (and variants of it) as a transparency channel in filter layers to make uneven, interesting application of filters: