This is a still from parameter set 2 of the recently posted vogel spiral dots Processing language script. The next post may be an animation of this or a link to an animation. Those parameters are:

int backgroundDotSize = 40;
//int foregroundDotSize = 15;
int vogelPointsDistance = 13;
color[] backgroundDotRNDcolors = {
, , , , , , , #008080, , , , , , , , , , , , , , , , , , , , , ,
};

— and the canvas or screen size set to HD video in the setup() function with this call:
size(1920,1080);

The random seed state for the wiggling of the dots wasn't captured; it is unknown.

The saved images were strung together into an animation with my script ffmpegAnim.sh, using these positional parameters:

(script call), 18fps source, 30fps target, quality 13, frame image format png:
ffmpegAnim.sh 18 30 13 png

This is the first frame of output from a Processing language script that animates colored dots in a vogel spiral layout. It uses the dawesometoolkit Processing library. A post of the animated result may appear soon where you're seeing this syndicated (if you're lucky); if not, check soon at the source from whence this originates.

The Processing source script is at:

https://github.com/earthbound19/_ebDev/blob/master/scripts/processing/vogel_spiral_dots_animated/vogel_spiral_dots_animated.pde

This publication uses v1.0.0 of that script, with parameter set 1, which is hard-coded in it:

int backgroundDotSize = 20;
int foregroundDotSize = 15;
int vogelPointsDistance = 13;
color[] backgroundDotRNDcolors = {
// tweaked with less pungent and more pastel orange and green, from _ebPalettes 16_max_chroma_med_light_hues_regular_hue_interval_perceptual.hexplt:
, , , , , , , ,
, , , , , ,
// omitted because it is used for the foreground dot color:
};

— and the canvas or screen size set to HD video in the setup() function with this call:
size(1920,1080);

The random seed state for the wiggling of the dots wasn't captured; it is unknown.
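
For anyone curious about the layout itself: the actual script uses the dawesometoolkit library (and wiggles the dots over time), but a still vogel spiral can be sketched from scratch with the standard golden-angle formula. The following is only a rough approximation using the parameter names above, not the script's own code:

int backgroundDotSize = 20;
int vogelPointsDistance = 13;

void setup() {
  size(1920, 1080);
  noStroke();
  background(255);
  float goldenAngle = radians(137.50776);     // ~360 degrees / phi^2
  for (int n = 0; n < 4000; n++) {
    float theta = n * goldenAngle;
    float r = vogelPointsDistance * sqrt(n);  // Vogel's model: radius grows with sqrt(n)
    float x = width / 2 + r * cos(theta);
    float y = height / 2 + r * sin(theta);
    fill(random(255), random(255), random(255));  // stand-in for the stripped palette array
    ellipse(x, y, backgroundDotSize, backgroundDotSize);
  }
}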

The saved images were strung together into an animation with my script ffmpegAnim.sh, using these positional parameters:

(script call), 18fps source, 30fps target, quality 13, frame image format png:
ffmpegAnim.sh 18 30 13 png

[EDIT: re-watching the animation, I think it's too slow. I'll make future animations about 2x faster.]

Four HD animations (in one video) of color growths generated from my Python script color_growth.py (see http://s.earthbound.io/colorGrowth). The last one stops just short of completion because, at this writing, that animation hasn't finished rendering. This post starts with stills of the completed renders, then links to (or includes?) a YouTube upload of it. If you're seeing this post syndicated, you may have to look at the original post to see the video.

YouTube video:

In batch rendering these animations I found that the renders proceeded far too slowly for my wants (days for one render), even though someone sped up my script a lot: https://github.com/earthbound19/_ebDev/pull/21. Also, the resulting animations had wonky perceptual speed (faster at start, slowing toward middle and end), so I updated the script to overcome that.

I overcame that by adding a --RAMP_UP_SAVE_EVERY_N option. With this option enabled, instead of saving an animation frame every N newly painted coordinates, the script ramps up the number of painted coordinates to wait for (before saving a new animation frame) over time, so that the newly rendered area in each frame is similar to dragging a selection rectangle from one corner of the canvas to the other. This makes the animation perceptually more linear along every growth vector, though technically it's non-linear (it speeds up, waiting longer between each rendered frame). The result is that renders happen much faster (as fewer frames are saved), and the animation speed seems constant (it no longer seems to slow toward the middle and end). In fact, as it approaches filling the corners it seems to race toward them, which is a bit funny and I like it. It still takes a night to render two or three of these, but that's much better than days for one.
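
To illustrate the idea (sketched in Processing-style Java for consistency with the other code on this page; the real option lives in the Python script, and the names and formula below are illustrative only, not the script's own): if the cumulative painted coverage at saved frame k is made proportional to k squared, it grows the way a selection rectangle dragged corner to corner at constant speed does, and the number of newly painted coordinates to wait for between saves ramps up accordingly.

int totalCoordinates = 1920 * 1080;  // canvas size, in paintable coordinates
int targetFrameCount = 600;          // assumed length of the finished animation, in frames
int coordinatesPainted = 0;          // advanced by the (hypothetical) growth loop
int framesSaved = 0;                 // advanced each time a frame is written to disk

boolean timeToSaveFrame() {
  // Save frame k once painted coverage crosses (k / targetFrameCount)^2 of the canvas;
  // successive thresholds are spaced further and further apart, so the wait between
  // saved frames ramps up while the growth front appears to advance at a steady speed.
  float k = framesSaved + 1;
  float threshold = totalCoordinates * sq(k / targetFrameCount);
  return coordinatesPainted >= threshold;
}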

(I want to try faster Python interpreters / C transpilers, or a wholesale C port, to see if anything can speed it up much more dramatically.)

For HD animations, these have relatively very small file sizes (only megabytes, instead of hundreds or thousands of megabytes). I believe that's because video compressors exploit parts of an image that remain the same from frame to frame, which is always true here for an increasingly large area (what's already painted) and a diminishing one (what hasn't been reached yet).

Color Growth 2019-10-04 22-46-35 c326e1

YouTube video publication of animation: https://youtu.be/9TVgyB-yYqE

This was made via cellular automata, by a Python script I wrote and which another programmer improved. It simulates imaginary bacteria that leave color-mutating waste as they colonize. I posted about this script earlier, in more detail and with other example output, here:

Color growth from virtual bacteria (generative)

The Python script which generates this, and virtually infinite varieties of works like it, is at: http://s.earthbound.io/colorGrowth

Content of source settings file 2019_10_04__22_46_35__c326e1_colorGrowth-Py.cgp:

--WIDTH 1920 --HEIGHT 1080 -a 175 -q 2 --RECLAIM_ORPHANS 1 -n 1 --RSHIFT 5 -b [252,251,201] -c [252,251,201] --BORDER_BLEND True --TILEABLE False --STOP_AT_PERCENT 1 --RANDOM_SEED 2005294276 --GROWTH_CLIP (0,5) --SAVE_PRESET True

This work and the video, I dedicate to the Public Domain.

To generate random irregular geometry like that in these images (for brainstorming art), 1) install Processing (http://processing.org/download), 2) download this script I wrote for it (https://github.com/earthbound19/_ebDev/blob/master/processing/by_me/rnd_irregular_geometry_gen/rnd_irregular_geometry_gen.pde), then 3) open it and press the "play" (triangle/run) button. It generates and saves PNGs and SVGs as fast as it can make them. Press the square (stop) button to stop the madness. I dedicate this Processing script and all the images I host generated by it to the Public Domain. The first two images here (you may only see one image if you read a syndication of this post) are tear sheets (contact sheets of many images) from v1.9.16 of the script. Search URL to bring up galleries of output from this script: http://earthbound.io/q/search.php?search=1&query=rnd_irregular_geometry_gen
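
As a rough illustration of the "saves PNGs and SVGs as fast as it can make them" behavior (this is a generic Processing pattern, not the actual rnd_irregular_geometry_gen code): each pass through draw() can record its geometry to a numbered SVG and also save a PNG of the same frame, so the sketch keeps producing files until you stop it.

import processing.svg.*;

void setup() {
  size(800, 800);
}

void draw() {
  String name = "geom-" + nf(frameCount, 5);  // e.g. geom-00001
  beginRecord(SVG, name + ".svg");            // capture this frame's drawing commands as SVG
  background(255);
  // stand-in for the script's actual irregular-geometry generation:
  for (int i = 0; i < 12; i++) {
    int verts = (int) random(3, 9);
    fill(random(255), random(255), random(255), 200);
    beginShape();
    for (int v = 0; v < verts; v++) {
      vertex(random(width), random(height));
    }
    endShape(CLOSE);
  }
  endRecord();
  saveFrame(name + ".png");                   // also save a raster copy of the same frame
}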

You probably can't reasonably copyright immediate output from this script, as anyone else can generate the same thing via the same script if they use the same random seed. But you can copyright modifications you make to the output.


BSaST v0.9.13 seed 1713832960 frame 133

I wrote a script in the Processing language which randomly generates colored, nested circles on a grid, akin to my cousin Daniel Bartholomew's work of the same title. When the Processing script runs, it animates the circles, and if you tap on them, their color animates. I entered it in the Springville Museum of Art's 34th Spiritual and Religious Art of Utah Contest (if it makes it into the show, it will be displayed on a large kiosk). [2019-10-04 UPDATE: This work made it into the show! It was on display at the Springville Museum of Art, October 16, 2019 – January 15, 2020.] Here is the artist statement:

"..by small and simple things are great things brought to pass.." -Alma 37:6

Tap or swipe circles and watch what happens!

Just like your interaction changes this work, I believe that God interferes with reality–sometimes to dazzling effect. I believe that mere existence is amazing besides, or if not, filled with promise.

Images you interact with are "tweeted" @earthbound19bot (Twitter social media).

I coded this in the Processing language with Daniel Bartholomew's support and input. It imitates his original pen-and-marker works of the same title, adding animation and generating any of about 4.3 billion possible variations at intervals.
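
The "about 4.3 billion" figure presumably reflects the roughly 2^32 possible values of a 32-bit random seed (like the seed named in the frame captions here); one seed fully determines a variation. A hypothetical, greatly simplified sketch of that idea (nested circles on a grid, all derived from a single seed) might look like the following; it is not the BSaST source, and it omits the animation and touch interaction:

int seed = 1713832960;  // example value: the seed named in the captions here

void setup() {
  size(1080, 1080);
  noStroke();
  randomSeed(seed);     // everything drawn below is reproducible from this one number
  background(255);
  int cols = 8;
  int rows = 8;
  float cell = width / (float) cols;
  for (int gx = 0; gx < cols; gx++) {
    for (int gy = 0; gy < rows; gy++) {
      float cx = gx * cell + cell / 2;
      float cy = gy * cell + cell / 2;
      int rings = (int) random(3, 7);
      for (int i = rings; i > 0; i--) {  // largest circle first, smaller ones nested on top
        fill(random(255), random(255), random(255));
        float d = cell * i / rings;
        ellipse(cx, cy, d, d);
      }
    }
  }
}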

BSaST v0.9.13 seed 1713832960 frame 133

I dedicate all these images to the Public Domain. I can literally make 4.3 billion other ones if anyone "steals" these. [UPDATE 2: The kiosk saved as many user-generated works from interactions with it as it could, and I've archived them in my "firehose" gallery here.]
