This is output from an image bomber I coded in Processing, worked up with some impressionist and book-illustration-style presets in Dynamic Auto-Painter Pro (a program that tries to make painterly images from any source image). I then did some layering trickery in Photoshop to blend the styles. The sources for the image bomber were circles in 24 shades of gray spaced evenly by human perception of light to dark (white to black), with random sizing, squishing, stretching and rotating (which is what the image bomber does).
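If you're curious how to space grays evenly by perception, here's one way to do it in Python–a minimal sketch spacing the grays evenly in CIE L* (a standard perceptual-lightness scale; not necessarily the exact spacing behind these images), using the standard L*-to-luminance and sRGB gamma-encoding formulas:

```python
# Sketch: 24 grays evenly spaced by perceived lightness (CIE L*),
# rather than by raw 0-255 RGB value.
def Lstar_to_srgb_gray(L):
    # CIE L* to linear relative luminance Y (0..1)
    Y = ((L + 16) / 116) ** 3 if L > 8 else L / 903.3
    # linear luminance to gamma-encoded sRGB (0..1)
    c = 12.92 * Y if Y <= 0.0031308 else 1.055 * Y ** (1 / 2.4) - 0.055
    v = round(c * 255)
    return "#{0:02x}{0:02x}{0:02x}".format(v)

grays = [Lstar_to_srgb_gray(i * 100 / 23) for i in range(24)]
print(grays)  # black to white in 24 perceptually even steps
```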
The purpose of images like this, for me, besides being cool by themselves, is to use them as transparency (alpha) layers for either effect or image layers in image editing programs. For alphas, white areas are opaque and black areas show through.
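Here's a minimal sketch of that kind of alpha use, assuming Pillow in Python; the file names are placeholders:

```python
# Use a grayscale image as an alpha mask to blend two layers.
from PIL import Image

fg = Image.open("texture.png").convert("RGB")   # layer to apply
bg = Image.open("photo.png").convert("RGB")     # layer underneath
mask = Image.open("bomb.png").convert("L")      # white = opaque, black = shows through

# Pillow's composite picks fg where the mask is white, bg where it is
# black, and blends proportionally in between -- the same math as a
# layer mask in an image editor. All three images must be the same size.
Image.composite(fg, bg, mask).save("blended.png")
```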
This is my original work and I dedicate it to the Public Domain. Original image size (see the syndication source, or, if you're looking at the syndication source, click the thumbnail for the full image): 3200 x 2392
These images and this animation (there may only be one image if you're reading a syndicated post) are ways of representing snapshots and the evolution of source code. The source of the second image is version 1.2.0 of a Processing script (or program, or code) which produces generative art. At this writing, a museum install of that generative-art-producing program is spamming Twitter to death (via twitter.com/earthbound19bot) every time a user interacts with it. The work is entitled "By Small and Simple Things" (see earthbound.io/blog/by-small-and-simple-things-digital-generative/).
How did I represent the source code of a generative art program as an image? There are ways. A term for creating images from arbitrary data is "data bending": taking data intended for one purpose and using or representing it via some other medium or format. One form of data bending is text to image; that's what I do here.
But I didn't like the ways of representing code as a "data bent" image that I found when I googled it, so I made my own.
The approach I don't like is to take every three bytes in a source (a byte being 8 zeros or ones) and turn them into RGB values (three values from 0 to 255 for Red, Green and Blue–the color components of almost all digital images you ever see on any screen). Conceptually, that doesn't model anything about the data as an image other than confused randomness, and aesthetically, it mostly makes random garish colors (I go into the problems of random RGB colors in this post: earthbound.io/blog/superior-color-sort-with-ciecam02-python/).
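For concreteness, here's a sketch of roughly what that approach does (assuming Pillow; the source file name is a placeholder):

```python
# Sketch of the "three bytes -> one RGB pixel" mapping I don't like.
import math
from PIL import Image

data = open("source.pde", "rb").read()
pixels = [tuple(data[i:i+3]) for i in range(0, len(data) - 2, 3)]

side = math.isqrt(len(pixels))        # crop to a square for simplicity
img = Image.new("RGB", (side, side))
img.putdata(pixels[:side * side])
img.save("naive_rgb.png")             # result: garish confetti
```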
A way I like better to model arbitrary data as an image is to map the source data bytes into _one channel_ of RGB, where that one channel fluctuates but the others don't. This shows low and high data points as variations in the intensity of a single color. In these data bent images, the green and blue values don't change, but the red ones do. Green is zero, blue is full, and the changes in the source data (mapped to red) make the pixels range from blue (all blue, no red) to violet (all blue, full red).
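A sketch of that one-channel mapping (again assuming Pillow; file names are placeholders):

```python
# Each source byte becomes a red value; green stays 0, blue stays 255,
# so pixels run from pure blue (byte 0) to violet (byte 255).
import math
from PIL import Image

data = open("source.pde", "rb").read()
pixels = [(b, 0, 255) for b in data]

side = math.isqrt(len(pixels))
img = Image.new("RGB", (side, side))
img.putdata(pixels[:side * side])
img.save("databent.png")
```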
Again, the first image here (or maybe not, if you're reading a syndicated copy of this post) represents the first version of By Small and Simple Things. The second represents the latest version (at this writing). The animation is everything in between, assembled via this other script over here: github.com/earthbound19/_ebArt/blob/master/recipes/mkDataBentAnim.sh
To generate random irregular geometry like in these images (for brainstorming art): 1) install Processing http://processing.org/download, 2) download this script I wrote for it https://github.com/earthbound19/_ebDev/blob/master/processing/by_me/rnd_irregular_geometry_gen/rnd_irregular_geometry_gen.pde, then 3) press the "play" (triangle/run) button. It generates and saves PNGs and SVGs as fast as it can make them. Press the square (stop) button to stop the madness. I dedicate this Processing script and all the images I host that it generated to the Public Domain. The first two images here (you may only see one if you read a syndication of this post) are contact sheets (sheets of many images) from v1.9.16 of the script. Search URL to bring up galleries of output from this script: http://earthbound.io/q/search.php?search=1&query=rnd_irregular_geometry_gen
You probably can't reasonably copyright immediate output from this script, as anyone else can generate the same thing via the same script if they use the same random seed. But you can copyright modifications you make to the output.
I wrote a script in the Processing language which randomly generates colored, nested circles on a grid akin to my cousin Daniel Bartholomew's work of the same title. When the Processing script runs, it animates the circles, and if you tap on them, their color animates. I entered it in the Springville Museum of Art's 34th Spiritual and Religious Art of Utah Contest (if it makes it into the show, it will be displayed on a large kiosk). [2019-10-04 UPDATE: This work made it into the show! It was on display at the Springville Museum of Art, October 16, 2019 – January 15, 2020.] Here is the artist statement:
"..by small and simple things are great things brought to pass.." -Alma 37:6
Tap or swipe circles and watch what happens!
Just like your interaction changes this work, I believe that God interferes with reality–sometimes to dazzling effect. I believe that mere existence is amazing besides, or if not, filled with promise.
Images you interact with are "tweeted" @earthbound19bot (Twitter social media).
I coded this in the Processing language with Daniel Bartholomew's support and input. It imitates his original pen and marker works of the same title, adding animation and generating any of about 4.3 billion (2^32) possible variations at intervals.
BSaST v0.9.13 seed 1713832960 frame 133
I dedicate all these images to the Public Domain. I can literally make 4.3 billion other ones if anyone "steals" these. [UPDATE 2: The kiosk saved as many user-generated works from interactions with it as it could, and I've archived them in my "firehose" gallery here.]
[UPDATE: there's a lot more to light and color science than I perhaps inaccurately get at in this post. Also, try color transforms and comparisons (if the latter is possible?) in Oklab.]
I wrote a Python script that uses that library (a CIECAM02 color appearance model implementation) to sort any list of RGB colors (expressed in hex) so that every color sits next to the colors most similar to it. (Figuring out an algorithm that does this broke my brain–I guess in a good way.) (I also wrote a bash script that runs it against all .hexplt files–a palette file format of one RGB hex color per line–in a directory.)
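Here's a much-simplified sketch of the core idea–greedily chaining each color to its nearest unused neighbor–with crude RGB Euclidean distance standing in for the perceptual color difference the real script uses:

```python
# Greedy nearest-neighbor ordering of hex colors (simplified sketch).
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i+2], 16) for i in (0, 2, 4))

def dist(a, b):
    # squared Euclidean distance in RGB; a real perceptual model is better
    return sum((x - y) ** 2 for x, y in zip(a, b))

def sort_colors(hex_colors):
    remaining = [(h, hex_to_rgb(h)) for h in hex_colors]
    ordered = [remaining.pop(0)]          # arbitrary starting color
    while remaining:
        last = ordered[-1][1]
        nearest = min(remaining, key=lambda c: dist(last, c[1]))
        remaining.remove(nearest)
        ordered.append(nearest)
    return [h for h, _ in ordered]

# reds chain together, then greens, then blue:
print(sort_colors(["#ff0000", "#00ff00", "#ee1100", "#11ee00", "#0000ff"]))
```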
The results are better than any other color sorting I've found, possibly better than what very perceptive humans could accomplish with complicated arrays of color.
Here's an image of Prismacolor marker colors, in the order that results from sorting by this script (the order is left to right, top to bottom):
For before/after comparison, here is an image of the same palette, but randomly sorted; the script can turn this ordering of the palette into the much more contiguous-appearing arrangement above:
(It's astonishing, but it seems like any color in that palette looks good with any other color in it, even though the palette comprises every basic hue, many grays, and some browns. They know what they are doing at Prismacolor. I got this palette from my cousin Daniel Bartholomew, who uses those colors in his art, which you may see over here and here.)
Here is another before and after comparison of 250 randomly generated RGB colors sorted by this script. You might correctly guess from this that random color generation in the RGB space often produces garish color arrays. I wonder whether random color generation somehow done in a model more aligned with human perception (like CIECAM02) would produce more pleasing results.
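A rough sketch of that idea: generate random colors with lightness and saturation pinned, using the Python standard library's HLS model as a cheap stand-in for a real appearance model like CIECAM02. It's only an approximation of perceptual uniformity, but it already avoids the worst garishness of uniform random RGB:

```python
# Random colors with constrained lightness and saturation, via HLS.
import random
import colorsys

def random_constrained_hex(n, lightness=0.6, saturation=0.5):
    colors = []
    for _ in range(n):
        # only the hue is random; lightness and saturation are pinned
        r, g, b = colorsys.hls_to_rgb(random.random(), lightness, saturation)
        colors.append("#{:02x}{:02x}{:02x}".format(
            int(r * 255), int(g * 255), int(b * 255)))
    return colors

print(random_constrained_hex(10))
```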
See how the sorted palette above has impressive runs of colors very near each other, including by tint or shade; good compromises when colors aren't near; and colors that are perceptually furthest from everything at the end. Also notice that darker and lighter shades of the same hue tend to go in separate lighter and darker runs–with colors that interpolate well between those runs placed in between!–instead of mixing lights and darks in the same run, where the greater difference of tint/shade would introduce a discontiguous aspect.
Tangent: in RGB space, I tested a theory that a collection of colors which add (or subtract!) to gray will generally be a pleasing combination of colors–and found this to be often true. I would like to test this theory in the CIECAM02 color space. I'd also like to test the theory that colors randomly generated in the CIECAM02 space will generally be more pleasing alone and together (regardless of whether they were conceived as combining to form gray).
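A minimal sketch of testing the "adds to gray" idea in RGB: average the palette's channels and measure how far the mix is from neutral (equal R, G, and B):

```python
# How close does a palette's average come to gray?
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i+2], 16) for i in (0, 2, 4))

def grayness(hex_colors):
    rgbs = [hex_to_rgb(h) for h in hex_colors]
    avg = [sum(c[i] for c in rgbs) / len(rgbs) for i in range(3)]
    spread = max(avg) - min(avg)   # 0 means the mix is exactly gray
    return avg, spread

# a red, a blue, and a green that roughly balance each other:
print(grayness(["#c22d1e", "#1e78c2", "#2dc21e"]))
```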
I really can't have those as the last images in this post. Here is a favorite palette.
[Edit 2020-10-07: I had renamed or moved several things I linked to from this post, which broke links. I corrected the links after a reader kindly requested to know where things had gone.]
Base work created with a custom setting of the Filter Forge auto-collage filter. I probably also used a custom variant of the SideToSide filter, building up alpha and hue layer variations with it to produce rectangular hue/tone variety. I might like to call this post-plasticism (after Piet Mondrian's neoplasticism; this is structurally similar but uses any color).
Insert illustration by yours truly, of the planet Hebe from the story in question. The story release announcement post is at this link. The high-resolution image (tap or click the image below) is free for personal use.
The first is vector art (an SVG), which you may save and reuse. You may reuse these works freely under Creative Commons Attribution 4.0; I'd appreciate credit in reuse.
The animated variant is conceived as unobtrusive decorative video art. Or maybe it would be distracting. I don't know, because I don't know who displays art as such. Do you?
Decorative pencil lines, doodled and scanned, then fixed up.
This animated sequence of variants was accomplished by randomly selecting colors and fills from the list tigerDogRabbit_HexColors.txt at http://s.earthbound.io/ColorSchemesHex, and by these scripts (also from _devtools): potraceAllBMPs.sh, BWsvgRandomColorFill.sh, renumberFiles.sh [svg], allSVG2img.sh, ffmpegAnim.sh.
Tools used: a flatbed scanner, Photoshop, Adobe Illustrator (for SVG node reduction while preserving a virtually identical appearance), Inkscape, LibreOffice Draw, K-Meleon (for quick SVG previews), Cygwin (to enable all of the listed .sh scripts), ffmpeg (to create the video via ffmpegAnim.sh), and svgo_optimize.sh (which has nodejs and svgo module dependencies; you can also just use the SVGOMG web service–do a web search–to optimize an SVG and prepare it for use by BWsvgRandomColorFill.sh).
The following variant and resource images which I made along the way, I release into the Public Domain:
Variant via the Filter Forge "side to side" filter by Skybase:
An alpha resource via the Filter Forge Terrain Heightfield Generator by LigH; I used this (and variants of it) as a transparency channel in filter layers to make interesting, uneven applications of filters: