
Art Jam: Experiments with Pixelization

I recently had the opportunity to try to answer a question I've thought about over the years. Well, it's more of a series of questions, but it goes something like this:

Consider a subject, and for the sake of this imaginary prompt, let's say the subject is a slice of watermelon. If our watermelon were portrayed in different visual mediums, say a screenshot of a 3D-rendered model versus an oil painting, each would look quite different. The two images would have little in common except that their subject is a slice of watermelon.

Now imagine scaling down those mental images of watermelons: 25%, 50%, 75%. Each time they're scaled down, some of the visual information is destroyed. Scale them all the way down to a single pixel and they'd be identical, or close enough that it wouldn't matter. That means somewhere in the downsampling process there exists a lowest common denominator between the slices of watermelon, a point where they become similar enough to disregard the source medium.

What would they look like?

I sought to answer that question.

The Constraints

Creativity works best with restrictions. I defined loose, fictional criteria for the experiment:

  1. The goal is to learn PixaTool first and foremost. Results don't matter.
  2. The length of time would shape decisions about quality, automation, etc. A faux deadline of one week was chosen.
  3. The subjects would be “fantasy NPC portraits in a retro CRPG”, and I needed to make 100 of them within the week.
  4. They'd be raster images in a pixel art style at 256px.
  5. The software would be Adobe Photoshop CS6, PixaTool, and Aseprite. I would end up using just the first two.

With that list of constraints in place I started exploring possibilities.

Source Portraits

I opted for three source images as my test subjects. I wanted diversity in the subject portrayed, the backgrounds, lighting, and medium, but I also wanted similar poses and a resolution large enough to crop down to bust portraits. I cropped each one and started running them through different tests in both Photoshop and PixaTool.

I don't own copyright to these images.

Sources: game screenshot, photo, and painting, all three cropped to a square.

Tests

With only a week on my imaginary deadline, I had to dive in and start learning as much as possible, as quickly as possible. I won't list everything I tried here, but there were a lot of attempts; some of them bore fruit.

Color Palette

One of the key steps in imposing a style on these images was forcing them into a constrained color palette. Photoshop offered gradient maps and indexed color as options, which necessitated learning about several different file types to move colors between programs: .asc, .ase, .pal, .act…the list goes on. PixaTool offered the easiest method with its Palette feature, a trend that would continue throughout the rest of testing. I don't know what algorithm it uses, but it seemed similar to indexing colors in Photoshop.

Lospec makes it easy to browse palettes, and I already had favorites saved that I use in other projects. I ran through a wide range to get a sense of what worked and what didn't. I would like to go back and learn more on this topic, as I stumbled along mostly by judging how the output looked, without understanding why certain palettes worked and others didn't. In the end, the one I liked most was a 32-color palette that came with PixaTool, with the cryptic (to me, at least) name of justing-girl-32.

Palette tests, top to bottom: Quake_192, 128matriax, Vinik24, Reha16, Nyx8, and the winner, justing-girl-32.
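
If you're curious what that kind of palette remap looks like outside of PixaTool or Photoshop, here's a rough sketch in Python using Pillow. The colors and file names are placeholders I made up for illustration; this is just the general nearest-color technique, not whatever algorithm PixaTool actually uses.

```python
# A rough sketch of remapping an image to a fixed palette, similar in
# spirit to Photoshop's indexed color mode. The four colors below are
# placeholders, not the actual justing-girl-32 entries.
from PIL import Image

PALETTE = [
    (26, 28, 44),     # dark blue
    (93, 39, 93),     # purple
    (239, 125, 87),   # orange
    (255, 205, 117),  # light sand
]

def apply_palette(src_path: str, dst_path: str, dither: bool = True) -> None:
    """Map every pixel of the source image to its nearest palette entry."""
    flat = [channel for color in PALETTE for channel in color]
    # Repeat the colors to fill Pillow's 256-entry palette so zero-padding
    # doesn't sneak an extra black entry into the remap.
    flat = (flat * (768 // len(flat) + 1))[:768]

    pal_img = Image.new("P", (1, 1))
    pal_img.putpalette(flat)

    img = Image.open(src_path).convert("RGB")
    mode = Image.Dither.FLOYDSTEINBERG if dither else Image.Dither.NONE
    img.quantize(palette=pal_img, dither=mode).convert("RGB").save(dst_path)

apply_palette("portrait.png", "portrait_indexed.png")
```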

Downsampling

PixaTool has various inputs controlling things like dithering, pixelization steps, contrast, etc. I spent the first two days of the week simply playing with settings and color palettes, iterating and evolving as I found qualities I liked. With the stipulation of producing 100 NPC portraits in my remaining days, I opted for choices that skewed toward the middle ground. I wanted something that looked *good enough* across a large array of potential source images rather than dialing in the highest quality for any single one.

If you have PixaTool and want to explore the settings, you can import the preset file.

I have folders upon folders of exports, like this Nyx8 palette test.
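
PixaTool's internals aren't documented, but the basic pixelization step can be approximated with the usual downscale-then-upscale trick. Here's a minimal sketch of that idea; the grid factor and file names are invented for the example, and the real tool does far more than this.

```python
# A minimal sketch of basic pixelization: shrink the image so each "pixel"
# of the final art covers several source pixels, then scale back up with
# nearest-neighbor so the blocks stay crisp. Not PixaTool's real algorithm.
from PIL import Image

def pixelize(src_path: str, dst_path: str, target: int = 256, factor: int = 4) -> None:
    img = Image.open(src_path).convert("RGB")

    # Center-crop to a square so the output matches the 256x256 portrait format.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))

    small = target // factor  # e.g. a 64x64 "pixel grid" for a 256px output
    img = img.resize((small, small), Image.Resampling.BILINEAR)
    img = img.resize((target, target), Image.Resampling.NEAREST)
    img.save(dst_path)

pixelize("source_portrait.png", "pixelized_portrait.png")
```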

Style Choices

It became clear early on that I couldn't rely entirely on destroying fidelity to unify the images stylistically. I also had to employ techniques to mask differences and blend the source images together.

  • Doing a fast-and-loose marquee selection around each subject and swapping the background for a set piece helped anchor them into a cohesive whole. This step was risky, though, because any manual effort on my part wouldn't scale. Photoshop CC purportedly has AI-assisted selection features that would be helpful in this regard. An action that could perform a sloppy lasso and delete would save lots of time.
  • Using Photoshop's Render → Lighting Effects filter, I threw a spotlight on each portrait, which helped unify the lighting (to some degree).
  • A filter helped blur out differences between the source mediums (I used Dry Brush).

I also created a simple border to go around each piece, as if it were part of some game UI. Overall, the only steps that couldn't easily be automated into a script were the manual selections up front, and for this experiment I'm okay with that.

Cropped game asset background used as the setting behind each portrait. I could see having several options to mix-and-match.
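
To give a sense of how the scriptable steps could chain together, here's a hypothetical batch loop: it composites each already-masked portrait over the shared background and pastes the UI border on top, with a note where the pixelization and palette passes from the earlier sketches would slot in. Folder and file names are invented.

```python
# A hypothetical batch loop for the scriptable part of the pipeline:
# composite each (already hand-masked) portrait over a shared background,
# then paste a UI border frame on top. Folder and file names are invented.
from pathlib import Path
from PIL import Image

SIZE = (256, 256)

background = Image.open("background.png").convert("RGBA").resize(SIZE)
border = Image.open("border.png").convert("RGBA").resize(SIZE)

out_dir = Path("output")
out_dir.mkdir(exist_ok=True)

for src in sorted(Path("masked_portraits").glob("*.png")):
    portrait = Image.open(src).convert("RGBA").resize(SIZE)

    canvas = background.copy()
    canvas.paste(portrait, (0, 0), portrait)   # use the portrait's alpha as mask
    # The pixelize() and apply_palette() passes sketched earlier would run here,
    # between the background composite and the border.
    canvas.paste(border, (0, 0), border)       # frame sits on top of everything

    canvas.save(out_dir / src.name)
```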

Results

I'm happy with the outcome here. These ladies look like they have a quest or rumors to share with the player.

I was shooting for that '90s-era CRPG look, and I feel like the outputs hit that mark fairly well. With actions automating the steps in Photoshop, and the settings saved as a preset in PixaTool, the 'labor time' for each image was less than two minutes. That's a scalable number for this contrived scenario: at two minutes apiece, 100 portraits is a bit over three hours of work, so I could create them all in an afternoon if I had the sources ready to go.

Extrapolating Sources

Excited that I had something producing mediocre-quality results, I wanted to grab more images to feed into it. With three distinct source mediums in the original test, I was confident that most sources could work. In a professional scenario these would be purchased stock photos, perhaps, or assets bought on a marketplace. For my continued testing, though, I simply grabbed whatever the search engine offered up.

Specifically, I wanted a different kind of subject that would demonstrate whether or not this pipeline was robust enough to accept it. I grabbed digital art of a knight and was pleasantly surprised.

At this point I realized the only time-sink left would be sourcing the…source images. As this was just an experiment, and I don't have licenses for 100 NPC photos on hand, I knew I wasn't going to go much further. For fun, though, I threw a silly webcam photo of myself into the mix, too.

Whatever time I saved (after the hypothetical labor of running 100 images through this pipeline) I'd likely spend increasing quality on an individual basis. This is no substitute for hand-placed pixel art, but I could fix the largest errors that would inevitably crop up in the batch of results.

With the advances in generative image AI (controversies aside), I could easily see it fitting into this work stream. That's a topic for another day.

T-shirts probably don't fit in here. Would definitely offer a quest, probably to slay a dragon.

Conclusion

I'm satisfied with the results of this experiment. Most importantly, I learned a lot, and as a bonus the results look good enough to be used as prototypes in a game. As for the 'pipeline' itself, there are several areas where efficiency could be gained. I'd love to work on this project more in the future, and potentially throw the portraits into Aseprite for simple animations.

One of my biggest takeaways was how powerful PixaTool is. I only learned the basics, and I wish there were documentation for the settings, but it allowed for results that would have taken exponentially longer in Photoshop alone. I'd been looking for a good way to test this software, and after this experiment, I can endorse it.

All things considered, this was a fun project. And while I still don't know what those slices of watermelon would look like, maybe one day I'll get around to making a retro CRPG to use all of these NPC portraits in.