It's been quite a while since I've blogged, and even longer since I've publicly released an OSS tool.
This is, in part, due to the amount of work that goes into crafting good quality content, especially when I want to demonstrate tools that I use in practice on projects.
It isn't for want of things to discuss - I have plenty I want to demonstrate and show off - but most of my client work is under a non-disclosure agreement (NDA). As a result, any screenshots I post that have anything to do with client code involve a lot of manual work: scrubbing namespaces, code comments, access keys, and so on.

Last year I helped my associate David Giard out with his introduction to Cognitive Services, where he posted handwritten letters to the Computer Vision API and was subsequently able to retrieve coordinate data for individual words and sentences. Building on this, I decided to pair the popular ImageSharp library with the Computer Vision API.

An example of my initial effort is below (the red block is mine, as I haven't got around to re-orientating the image yet - I may consider it as a feature request, though):

An NDepend screenshot, after having been passed through Trash PaNDA to remove a client's name from code namespaces

I plan on using this tool an awful lot: it means I can post much more about my project work on a regular basis without breaching my obligations to my clients.

So, let's dive into code!

The main Trash PaNDA method for processing images

Lines 32 to 43 handle upsizing the image - I initially had problems getting text recognition to pick up small fonts in a 1920x1080 (1080p) image. Azure Computer Vision will accept an individual image up to 4MB in size and 4200px in either direction, hence the upscaling code (which is optional, but recommended for high-resolution screenshots).
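
For illustration, here's a minimal sketch of how that upscaling step can work with ImageSharp's Resize API. The names here (inputBytes, MaxDimension) and the scale calculation are my assumptions for the example, not code lifted from the tool:

```csharp
using System;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Hypothetical sketch: upscale towards the OCR service's 4200px ceiling
// so that small fonts in a 1080p screenshot survive recognition.
const int MaxDimension = 4200;

Image image = Image.Load(inputBytes); // inputBytes: the screenshot as a byte[]
int originalWidth = image.Width;
int originalHeight = image.Height;

// Largest uniform scale factor that keeps both edges within the limit.
double scale = Math.Min(
    (double)MaxDimension / originalWidth,
    (double)MaxDimension / originalHeight);

if (scale > 1)
{
    image.Mutate(ctx => ctx.Resize(
        (int)(originalWidth * scale),
        (int)(originalHeight * scale)));
}
```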

Lines 45 to 48 send the MemoryStream to the Computer Vision API (there are a number of useful methods on the ComputerVisionClient class, not just PrintedText recognition).
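
If you want to try the call yourself, it looks roughly like this with the Microsoft.Azure.CognitiveServices.Vision.ComputerVision NuGet package - the endpoint and key are placeholders, and image carries over from the previous sketch:

```csharp
using System.IO;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

// Placeholder credentials - substitute your own Azure resource details.
var client = new ComputerVisionClient(
    new ApiKeyServiceClientCredentials("<subscription-key>"))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
};

// Serialise the upscaled image into a MemoryStream for the API call.
using var imageStream = new MemoryStream();
image.SaveAsPng(imageStream);
imageStream.Position = 0;

// detectOrientation: true asks the service to handle rotated text.
OcrResult ocrResult = await client.RecognizePrintedTextInStreamAsync(
    detectOrientation: true, image: imageStream);
```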

Lines 50 to 53 loop through the bounding boxes for the words that were found, and apply a box blur to the image at each one.
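
Here's a sketch of that loop, assuming the OcrResult shape returned by the client above (each word's BoundingBox is a comma-separated "x,y,width,height" string) and ImageSharp's rectangle-targeted BoxBlur overload. In a real run you'd presumably filter to just the sensitive words rather than blurring everything:

```csharp
using System.Linq;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Hypothetical sketch: blur each recognised word's bounding box in place.
foreach (var region in ocrResult.Regions)
{
    foreach (var line in region.Lines)
    {
        foreach (var word in line.Words)
        {
            // BoundingBox arrives as "x,y,width,height" in pixels.
            int[] box = word.BoundingBox.Split(',')
                            .Select(int.Parse)
                            .ToArray();

            var rect = new Rectangle(box[0], box[1], box[2], box[3]);

            // Box-blur only within the word's rectangle.
            image.Mutate(ctx => ctx.BoxBlur(10, rect));
        }
    }
}
```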

Lines 55 to 57 shrink the image back down to its original size, and lines 59 to 62 return a byte array, which you can then pass to file or memory operations.
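
The final stage might look something like this, reusing the dimensions captured before the upscale:

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Restore the original dimensions after blurring.
image.Mutate(ctx => ctx.Resize(originalWidth, originalHeight));

// Serialise to a byte array for file or in-memory use.
using var output = new MemoryStream();
image.SaveAsPng(output);
return output.ToArray();
```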

The source is over on GitHub, and I'll be making this into a NuGet package soon. Let me know what you think in the comments!