Jekyll2022-10-22T07:12:19+00:00https://www.lincs.dev/feed.xmlDo you remember?Occasional notes to my future self & perhaps othersMark IngramBasic CDK deployment pipeline with GitHub Actions2020-07-08T00:00:00+00:002020-07-08T00:00:00+00:00https://www.lincs.dev/blog/github-actions-for-cdk<h1 id="introduction">Introduction</h1>
<p>At the time of writing, <a href="https://github.com/pricing">GitHub Actions</a> has a free tier allowing 2,000 minutes a month of builds for private repos. For smaller side projects already hosted on GitHub, it is straightforward to set up a PR-based workflow to automate the deployment process.</p>
<p>A CI/CD process suitable for a small project could be:</p>
<ul>
<li>PR workflow:
<ul>
<li>on every pull request commit targeting main branch:
<ul>
<li>run all tests</li>
<li>run <code class="language-plaintext highlighter-rouge">cdk synth</code> to produce a Cloud Assembly directory (cdk.out) with the latest changes</li>
<li>run <code class="language-plaintext highlighter-rouge">cdk diff</code> with the produced Cloud Assembly</li>
<li>post diff as a comment to the PR</li>
</ul>
</li>
</ul>
</li>
<li>Main workflow:
<ul>
<li>on every commit to main branch:
<ul>
<li>acquire cdk.out artifact from the PR workflow synth</li>
<li>run <code class="language-plaintext highlighter-rouge">cdk deploy</code> with that Cloud Assembly</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>Currently GitHub Actions doesn’t make it easy to share artifacts between different workflows (see <a href="https://github.com/actions/download-artifact/issues/3">https://github.com/actions/download-artifact/issues/3</a>). It is possible by wrangling the APIs, but until that gets polished we’ll amend the main workflow as follows:</p>
<ul>
<li>Main workflow:
<ul>
<li>on every commit to main branch:
<ul>
<li><em>run <code class="language-plaintext highlighter-rouge">cdk synth</code> to produce a Cloud Assembly directory (cdk.out) with the latest changes</em></li>
<li>run <code class="language-plaintext highlighter-rouge">cdk deploy</code> with the produced Cloud Assembly</li>
</ul>
</li>
</ul>
</li>
</ul>
<h1 id="pr-workflow">PR Workflow</h1>
<p>Using the repo introduced <a href="./2020-06-30-bazel-clojure.md">previously</a> (<a href="https://github.com/markdingram/bazel-cdk-clojure">https://github.com/markdingram/bazel-cdk-clojure</a>), a <a href="https://github.com/markdingram/bazel-cdk-clojure/blob/fc3d5b4259fe825e945227e70a6aa98f0fdbdd40/.github/workflows/pr.yml">pr.yml workflow</a> is added.</p>
<blockquote>
<p>Aside: I lost count of the number of broken links I have seen caused by not using permanent GitHub links - to get a permalink, simply press <code class="language-plaintext highlighter-rouge">y</code>.</p>
</blockquote>
<p>While the generation of the cdk.out directory uses Bazel, the diff / deploy of that output is done without reference to Bazel, using an NPM-installed CDK binary. Simplifying this example down for a pure NPM JavaScript project would be straightforward.</p>
<p>The workflow shouldn’t be complicated to follow; in the <code class="language-plaintext highlighter-rouge">build</code> job the steps of interest are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- name: Test
  run: |
    bazel test //...
- name: Synth
  run: |
    bazel build infra:synth
    find dist/bin/infra/cdk.out -type d -exec chmod 0755 {} \;
- name: Upload Cloud Assembly
  uses: actions/upload-artifact@v1
  with:
    name: cdk.out
    path: dist/bin/infra/cdk.out
</code></pre></div></div>
<p>As required, all tests run first. A Bazel synth rule has been added to output the Cloud Assembly to <code class="language-plaintext highlighter-rouge">dist/bin/infra/cdk.out</code> via <code class="language-plaintext highlighter-rouge">bazel build infra:synth</code>.</p>
<p>As of now CDK likes to add ‘.cache’ directories inside the cdk.out directory when uploading assets to S3. Since all Bazel output is read-only, make the directories writable prior to uploading them as an artifact.</p>
<p>In the <code class="language-plaintext highlighter-rouge">diff</code> job the steps of note are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- name: Download Cloud Assembly
  uses: actions/download-artifact@v1
  with:
    name: cdk.out
- name: Run CDK diff
  run: node_modules/.bin/cdk diff -c aws-cdk:enableDiffNoFail=true --no-color --app cdk.out "*" 2>&1 | tee cdk.log
- name: Add comment to PR
  env:
    URL: $
    GITHUB_TOKEN: $
  run: |
    jq --raw-input --slurp '{body: .}' cdk.log > cdk.json
    curl \
      -H "Content-Type: application/json" \
      -H "Authorization: token $GITHUB_TOKEN" \
      -d @cdk.json \
      -X POST \
      $URL
</code></pre></div></div>
<p>JQ is used to slurp the raw text output of CDK diff into a JSON message suitable for posting to GitHub. Presumably there is a GitHub Action that would encapsulate the PR comment posting, but the REST API is so straightforward that adding a dependency for it seems overkill.</p>
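<p>For readers unfamiliar with those jq flags, the transformation can be sketched in Python (an illustrative equivalent only, not part of the workflow - the function name is made up):</p>

```python
import json

def diff_to_comment_payload(log_text: str) -> str:
    # jq's --raw-input --slurp reads the whole log as one string, and
    # '{body: .}' wraps it in the object shape the GitHub comments API expects.
    return json.dumps({"body": log_text})

print(diff_to_comment_payload("Stack demo\n[~] AWS::S3::Bucket my_bucket\n"))
```

<p>The key point is the escaping: the multi-line diff becomes a single JSON string that curl can post as the request body.</p>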
<p>With this workflow in place a CDK diff should spring onto the PR after a few minutes:</p>
<p><img src="/assets/cdk_diff_pr.png" alt="cdk_diff.png" /></p>
<p>It is also useful to set up a GitHub branch protection rule that ensures the PR workflow succeeds prior to merge:</p>
<p><img src="/assets/gh_branch_protection.png" alt="gh_branch_protection.png" /></p>
<p>The main workflow is similar to the PR one, but with a <code class="language-plaintext highlighter-rouge">cdk deploy</code> step instead.</p>
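<p>A sketch of what that might look like (illustrative only - the step names and the <code class="language-plaintext highlighter-rouge">--require-approval never</code> flag are assumptions, not the repo’s exact workflow):</p>

```yaml
- name: Download Cloud Assembly
  uses: actions/download-artifact@v1
  with:
    name: cdk.out
- name: Run CDK deploy
  # deploy straight from the pre-synthesised Cloud Assembly; skip the
  # interactive approval prompt since this runs unattended
  run: node_modules/.bin/cdk deploy --app cdk.out --require-approval never "*"
```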
<h1 id="notes">Notes</h1>
<ul>
<li>
<p>I first encountered showing planned changes on Pull Requests with <a href="https://www.runatlantis.io">Atlantis</a> a couple of years ago. I believe HashiCorp hired its maintainer to work on <a href="https://www.terraform.io/docs/cloud/index.html">Terraform Cloud</a>, so both are worth checking out for inspiration!</p>
</li>
<li>
<p>The Amazon equivalent, <a href="https://aws.amazon.com/codebuild/pricing">AWS CodeBuild</a>, has a free tier of only a miserly 100 minutes. After the free tiers run out, AWS’s lowest tier of build instance (general1.small) is cheaper than GitHub’s ($0.005/minute vs $0.008/minute), but has less RAM (3GB compared to 7GB). For larger projects I’d be inclined to pursue AWS (configured via CDK, of course!). The CodeBuild / CodePipeline constructs and further integration over time with CDK / CloudFormation will likely allow more detailed/involved workflows to be constructed.</p>
</li>
</ul>Mark IngramIntroductionDocker Volumes on host with SELinux2020-07-03T00:00:00+00:002020-07-03T00:00:00+00:00https://www.lincs.dev/blog/docker-volumes-selinux<p>(A reminder for next time I end up scratching my head on this)</p>
<p>When mounting a volume on a host with SELinux enabled, add a trailing <code class="language-plaintext highlighter-rouge">:Z</code> to the volume syntax, e.g.:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run -v /var/db:/var/db:Z rhel7 /bin/sh
</code></pre></div></div>
<p>This will label the mounted directory to allow access from the container - read more here:</p>
<p><a href="http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux">http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux</a></p>Mark Ingram(A reminder for next time I end up scratching my head on this)Bazel Clojure2020-06-30T00:00:00+00:002020-06-30T00:00:00+00:00https://www.lincs.dev/blog/bazel-clojure<p>The point of stuffing <a href="./2020-06-27-clojure-aot.md">Clojure AOT</a> into a Java Annotation processor is now revealed.</p>
<p>There are some existing Clojure rules for Bazel at <a href="https://github.com/simuons/rules_clojure">https://github.com/simuons/rules_clojure</a> - I tried them, and whilst they largely work, I decided as part of my learning on Bazel to see how far I could push the native Java rules to build Clojure.</p>
<p>Quite far, as it turns out - here’s the obligatory example repo: <a href="https://github.com/markdingram/bazel-clojure">https://github.com/markdingram/bazel-clojure</a></p>
<p>Some example commands:</p>
<ul>
<li>build: <code class="language-plaintext highlighter-rouge">bazel build //...</code></li>
<li>repl (socket repl, port 5555): <code class="language-plaintext highlighter-rouge">bazel run repl</code></li>
<li>
<p>test (kaocha): <code class="language-plaintext highlighter-rouge">bazel test //...</code></p>
</li>
<li>query external JARs: <code class="language-plaintext highlighter-rouge">bazel query @maven//... | sort</code></li>
</ul>
<h1 id="notes-on-implementation">Notes on implementation</h1>
<p>Inspired by:
<a href="https://github.com/simuons/rules_clojure">https://github.com/simuons/rules_clojure</a></p>
<p>This repo differs by attempting to build directly on the built-in Java rules, which has some implications:</p>
<ul>
<li>
<p>The root folder names <code class="language-plaintext highlighter-rouge">java</code> and <code class="language-plaintext highlighter-rouge">javatests</code> are mandated by these rules (or the traditional Maven <code class="language-plaintext highlighter-rouge">src/main/java</code>).</p>
</li>
<li>
<p>AOT is made possible through a <a href="https://docs.bazel.build/versions/master/be/java.html#java_plugin">https://docs.bazel.build/versions/master/be/java.html#java_plugin</a>.
This triggers a Java Annotation Processor that picks up the namespace to AOT from a Java annotation. A bit
roundabout but seems to work quite nicely - whether it is a good idea remains to be seen!</p>
<p>One nice feature of this approach is that the exact files produced by AOT can be separately inspected via: <code class="language-plaintext highlighter-rouge">unzip -l dist/bin/java/example/bin.jar</code></p>
</li>
<li>
<p>Kaocha is integrated via a basic Bazel macro in <a href="rules/kaocha/rules.bzl">rules.bzl</a></p>
</li>
<li>
<p>The IntelliJ plugin doesn’t add runtime deps as External Libraries. The <code class="language-plaintext highlighter-rouge">tools/intellij/BUILD</code> is a hack to get at least the direct dependencies showing up in IntelliJ. A few issues touch on this, e.g. <a href="https://github.com/bazelbuild/intellij/issues/1825">https://github.com/bazelbuild/intellij/issues/1825</a>,
<a href="https://github.com/bazelbuild/intellij/issues/490">https://github.com/bazelbuild/intellij/issues/490</a></p>
</li>
</ul>Mark IngramThe point of stuffing Clojure AOT into a Java Annotation processor is now revealed.Test Run Observability2020-06-28T00:00:00+00:002020-06-28T00:00:00+00:00https://www.lincs.dev/blog/test-observable-driven<p>I’ve been witness to a few monorepo projects where Pull Requests to main are blocked until after a successful
CI pipeline run. In each case the CI pipelines have largely been a black box, neglected until suddenly
a sufficient threshold of pain across the organisation is breached. My tolerance seems lower than most, as all
I can see are the sands of productivity draining away well before that threshold!</p>
<p>Why not put in observability at the earliest possible time?</p>
<p>This post gives an overview of a simple feed of test runs from Bazel into <a href="https://www.honeycomb.io">Honeycomb</a> via
Bazel’s <a href="https://docs.bazel.build/versions/master/build-event-protocol.html">Build Event Protocol</a> (BEP)
and Honeycomb’s <a href="https://docs.honeycomb.io/getting-data-in/integrations/honeytail/">Honeytail</a> agent.</p>
<h1 id="step-1---output-test-run-from-bazel">Step 1 - Output test run from Bazel</h1>
<p>Add a flag to <code class="language-plaintext highlighter-rouge">.bazelrc</code> to produce an <a href="http://ndjson.org/">NDJSON</a> file containing various build events:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>common --build_event_json_file="/var/log/bazel/build_events.ndjson"
</code></pre></div></div>
<blockquote>
<p>There is also a <code class="language-plaintext highlighter-rouge">bes_backend</code> option to send the data via gRPC. It could be interesting to
see if a Lambda could receive via this route for serverless handling.</p>
</blockquote>
<p>For this exercise the data from the BES <code class="language-plaintext highlighter-rouge">testResult</code> events will be sent through to Honeycomb:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bazel test //feed/...
...
Executed 1 out of 1 test: 1 fails locally.
INFO: Build Event Protocol files produced successfully.
INFO: Build completed, 1 test FAILED, 2 total actions
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "id": {
    "testResult": {
      "label": "//feed/src/test/clj/clj_stomp/alpha:alpha",
      "run": 1,
      "shard": 1,
      "attempt": 1,
      "configuration": {
        "id": "9d0af820af00b297c2128aed3f4a3f642a7a422457413b1c89acc467b7badc18"
      }
    }
  },
  "testResult": {
    ...
    "testAttemptDurationMillis": "46",
    "status": "FAILED",
    "testAttemptStartMillisEpoch": "1593379441817",
    ...
  }
}
</code></pre></div></div>
<h1 id="formatting-the-bazel-event">Formatting the Bazel event</h1>
<p>Honeycomb may be able to consume the events directly - but I didn’t check. Instead JQ is used to flatten and reduce
the output event into a simplified format:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/sh
FILTER='select(."id" | objects | has("testResult")) |
  {label: ."id"[].label,
   id: ."id"[].configuration.id,
   testDurationMillis: .testResult.testAttemptDurationMillis | tonumber,
   testResultStatus: .testResult.status,
   timestamp: (.testResult.testAttemptStartMillisEpoch | tonumber / 1000) | todate }'

tail -f /var/log/bazel/build_events.ndjson 2> /dev/null \
  | jq --unbuffered -c "${FILTER}" \
  | tee /var/log/bazel/test_events.json
</code></pre></div></div>
<p><a href="/assets/map_test_results.sh">map_test_results.sh</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{"label":"//feed/src/test/clj/clj_stomp/alpha:alpha","id":"9d0af820af00b297c2128aed3f4a3f642a7a422457413b1c89acc467b7badc18","testDurationMillis":46,"testResultStatus":"FAILED","timestamp":"2020-06-28T21:21:17Z"}
</code></pre></div></div>
<blockquote>
<p><a href="https://jqplay.org">https://jqplay.org</a> was invaluable here</p>
</blockquote>
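<p>The jq filter’s logic can also be sketched in Python (an illustrative equivalent, not part of the setup - the function name is made up, and the field names come from the sample event above):</p>

```python
import json
import sys
from datetime import datetime, timezone
from typing import Optional

def flatten_test_result(event: dict) -> Optional[dict]:
    """Keep only testResult events, reduced to a flat record for Honeytail."""
    ident = event.get("id", {})
    if "testResult" not in ident:
        return None  # drop progress/configured/etc. events
    result = event["testResult"]
    start_ms = int(result["testAttemptStartMillisEpoch"])
    return {
        "label": ident["testResult"]["label"],
        "id": ident["testResult"]["configuration"]["id"],
        "testDurationMillis": int(result["testAttemptDurationMillis"]),
        "testResultStatus": result["status"],
        # equivalent of jq's todate: epoch seconds -> UTC ISO-8601
        "timestamp": datetime.fromtimestamp(
            start_ms / 1000, tz=timezone.utc
        ).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

if __name__ == "__main__":
    # stream NDJSON in, flattened NDJSON out
    for line in sys.stdin:
        record = flatten_test_result(json.loads(line))
        if record is not None:
            print(json.dumps(record))
```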
<h1 id="honeycomb">Honeycomb</h1>
<p>Signing up to Honeycomb and installing the <a href="https://docs.honeycomb.io/getting-data-in/integrations/honeytail/">Honeytail agent</a> is very straightforward.</p>
<p>Once the agent is installed edit <code class="language-plaintext highlighter-rouge">/etc/honeytail/honeytail.conf</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ParserName = json
WriteKey = XXXX
LogFiles = /var/log/bazel/test_events.json
Dataset = bazel
</code></pre></div></div>
<p>Start the agent & begin running Bazel tests - all being well the data will soon be flowing into the Honeycomb UI ready for analysis!</p>
<p><img src="/assets/honeycomb.png" alt="honeycomb.png" /></p>
<h1 id="summary">Summary</h1>
<p>This post showed how to get test run outcomes flowing into Honeycomb in a low-tech manner. Even this minimal setup
could provide the necessary observability into test pipelines to detect problems and take action prior to a blowout!</p>
<p>A couple of links to check out -
<a href="https://www.heavybit.com/library/podcasts/o11ycast/ep-21-learning-systems-with-jessica-kerr">https://www.heavybit.com/library/podcasts/o11ycast/ep-21-learning-systems-with-jessica-kerr</a></p>
<p><a href="https://thenewstack.io/a-next-step-beyond-test-driven-development">https://thenewstack.io/a-next-step-beyond-test-driven-development</a></p>
<h1 id="addendum-2020-07-08">Addendum (2020-07-08)</h1>
<p>Discovered the <a href="https://github.com/honeycombio/buildevents">https://github.com/honeycombio/buildevents</a> project which looks like it could contribute to a more comprehensive approach building out CI observability.</p>Mark IngramI’ve been witness to a few monorepo projects where Pull Requests to main are blocked until after a successful CI pipeline run. In each case the CI pipelines have largely been a blackbox neglected until.. suddenly a sufficient threshold of pain across the organisation is breached. My tolerance seems lower than most as all I can see are the sands of productivity draining away from the organisation well before the immediate threshold!Clojure AOT via Java Annotation Processor2020-06-27T00:00:00+00:002020-06-27T00:00:00+00:00https://www.lincs.dev/blog/clojure-aot<p>This post looks at a way to trigger Clojure AOT from a Java Annotation Processor running at Compilation time.</p>
<p>Example code is at <a href="https://github.com/markdingram/blog_clojure_aot">https://github.com/markdingram/blog_clojure_aot</a></p>
<h1 id="what-is-aot">What is AOT?</h1>
<p>Most commonly, Clojure source files are compiled on the fly at runtime. Clojure <a href="https://clojure.org/reference/compilation">Ahead of Time (AOT) Compilation</a> allows this to happen at compile time instead, outputting Java class files from the Clojure sources.</p>
<p>A few reasons are given on the Clojure website, but the main purpose I’ve come across is Java interop: generating named classes for use by Java. For example, AWS Lambda requires a class that implements <code class="language-plaintext highlighter-rouge">com.amazonaws.services.lambda.runtime.RequestStreamHandler</code> to be included in the Lambda ZIP.</p>
<blockquote>
<p>If Clojure’s long startup times are an issue for your use case, consider using the excellent <a href="https://github.com/borkdude/babashka">Babashka</a> instead of core Clojure.</p>
</blockquote>
<h1 id="what-are-java-annotation-processors">What are Java Annotation Processors?</h1>
<p>From <a href="https://en.wikipedia.org/wiki/Java_annotation">Wikipedia</a> - When Java source code is compiled, annotations can be processed by compiler plug-ins called annotation processors. Processors can produce informational messages or create additional Java source files or resources, which in turn may be compiled and processed.</p>
<p>A couple of examples:</p>
<ul>
<li>
<p><a href="https://dagger.dev/">Dagger</a> - an IoC framework. Pushing the IoC graph materialisation to compile time avoids the runtime pain of trying to debug/understand annotations in a moderately complicated Spring application. Another advantage is that there is a nice exit strategy built in - run Dagger one last time and check in the generated source code.</p>
</li>
<li>
<p>A similar approach is seen in <a href="https://docs.micronaut.io/latest/guide/index.html#ioc">Micronaut</a>.</p>
</li>
</ul>
<h1 id="example">Example</h1>
<p>Example code is at <a href="https://github.com/markdingram/blog_clojure_aot">https://github.com/markdingram/blog_clojure_aot</a></p>
<p>Instructions:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mvn clean package
$ java -jar sample/target/sample-1.0-SNAPSHOT.jar markdingram.sample
Hello AOT!
</code></pre></div></div>
<p>The Aot annotation is added to Java package-info files:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Aot
package markdingram.sample;
import com.github.markdingram.aot.Aot;
</code></pre></div></div>
<p>Upon detection of such an annotation the compilation is triggered using Clojure’s Java API:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
IFn compileFn = Clojure.var("clojure.core", "compile");
Var.pushThreadBindings(RT.map(
Compiler.COMPILE_PATH, outputPath.toString(),
Compiler.COMPILE_FILES, Boolean.TRUE));
compileFn.invoke(Symbol.create(namespace));
...
</code></pre></div></div>Mark IngramThis post looks at a way to trigger Clojure AOT from a Java Annotation Processor running at Compilation time.Bazel Redux2020-06-15T00:00:00+00:002020-06-15T00:00:00+00:00https://www.lincs.dev/blog/bazel-redux<p>A lot changed in the Bazel ecosystem since the previous 2018 post. I brought the <a href="http://github.com/markdingram/java-jni-haskell">repo</a> up to date with latest Bazel (3.3.0) & Haskell rules.</p>
<p>Notes:</p>
<ul>
<li>
<p>The .bazelversion file now specifies the exact Bazel version that was last used, to encourage reproducibility - similar to jenv (Java) / nvm (Node).</p>
</li>
<li>
<p>The WORKSPACE now sets up the @openjdk / @stackage repositories referenced by the JNI dependency: <a href="https://github.com/tweag/inline-java/blob/ab4b05aa423ef04951ff9a06275b48e662f139e0/jni/BUILD.bazel">https://github.com/tweag/inline-java/blob/ab4b05aa423ef04951ff9a06275b48e662f139e0/jni/BUILD.bazel</a>.</p>
</li>
<li>
<p>Got stuck for a while on <code class="language-plaintext highlighter-rouge">fatal error: jni.h: No such file or directory</code> from the line <code class="language-plaintext highlighter-rouge">#include <jni.h></code> in the JNI project. A comment on <a href="https://stackoverflow.com/questions/51427219/c-bazel-how-to-include-angle-bracket-system-headers/51441444#comment110318539_51441444">Stack Overflow</a> pointed to the solution - <code class="language-plaintext highlighter-rouge">strip_include_prefix = "."</code>. This changes the <code class="language-plaintext highlighter-rouge">-I</code> paths sent to the compiler, allowing the bracketed (system) header import to succeed.</p>
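<p>To illustrate, a hypothetical BUILD fragment (the target and header names are made up, not taken from the repo):</p>

```starlark
cc_library(
    name = "jni_headers",
    hdrs = ["jni.h", "jni_md.h"],
    # Stripping the package prefix from the -I paths passed to the
    # compiler lets the bracketed form `#include <jni.h>` resolve.
    strip_include_prefix = ".",
)
```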
</li>
<li>
<p><code class="language-plaintext highlighter-rouge">/usr/bin/ld.gold: error: external/rules_haskell_ghc_linux_amd64/lib/rts/libHSrts.a(CNF.o): requires unsupported dynamic reloc 11; recompile with -fPIC</code> - resolved by switching the cc_binary to <code class="language-plaintext highlighter-rouge">linkstatic = False</code>. At a guess the Haskell libraries from stackage aren’t suitable for dynamic reloc (yet?). This could be worked around by building all the libraries ourselves with the necessary flags.</p>
</li>
</ul>Mark IngramA lot changed in the Bazel ecosystem since the previous 2018 post. I brought the repo up to date with latest Bazel (3.3.0) & Haskell rules.Notes on Debugging a C Library2020-04-27T00:00:00+00:002020-04-27T00:00:00+00:00https://www.lincs.dev/blog/debugging-libgd<p>Notes on an investigation into a problem resizing certain PNGs using the <a href="https://raw.githubusercontent.com/nginx/nginx/master/src/http/modules/ngx_http_image_filter_module.c">NGINX image filter module</a>.</p>
<p>Transparency was going AWOL upon resize, viz:</p>
<ul>
<li>
<p>insert PNG</p>
</li>
<li>
<p>insert PNG cropped bad</p>
</li>
</ul>
<p>Following are some notes on how to attempt to debug the underlying <a href="https://libgd.github.io">libgd library</a> used in the conversion.</p>
<p>This was running on a Fedora host:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo dnf groupinstall "Development Tools"
$ sudo dnf install libpng-devel libjpeg-devel gdb
$ git clone https://github.com/libgd/libgd
$ cd libgd
$ ./configure CFLAGS="-g -O0"
$ make
$ sudo make install
$ echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/usr-local.conf
$ sudo ldconfig
$ ldconfig -p | grep libgd.so
libgd.so.3 (libc6,x86-64) => /usr/local/lib/libgd.so.3
libgd.so.3 (libc6,x86-64) => /lib64/libgd.so.3
libgd.so (libc6,x86-64) => /usr/local/lib/libgd.so
</code></pre></div></div>
<p>Create a test file that roughly follows the code paths used by Nginx:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mkdir /tmp/png && cd /tmp/png
$ cp ... source.png
$ gcc -o example example.c -lgd -lpng -lm && ./example
</code></pre></div></div>
<p>example.c</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/* Bring in gd library functions */
#include "gd.h"
/* Bring in standard I/O so we can output the PNG to a file */
#include <stdio.h>

int main() {
    /* Declare the images */
    gdImagePtr src, dest;
    int red, green, blue, transparent;
    /* Declare the output file */
    FILE *pngout;
    pngout = fopen("test.png", "wb");

    src = gdImageCreateFromFile("goldbar1.png");
    transparent = gdImageGetTransparent(src);
    red = gdImageRed(src, transparent);
    blue = gdImageBlue(src, transparent);
    green = gdImageGreen(src, transparent);
    printf("Transparent: %d, R:%d, G:%d, B:%d\n", transparent, red, green, blue);

    dest = gdImageCreate(256, 256);
    // dest = gdImageCreateTrueColor(256, 256);
    // gdImageSaveAlpha(dest, 1);
    // gdImagePaletteToTrueColor(src);
    // gdImageCopyResampled(dest, src, 0, 0, 0, 0, 256, 256, 512, 512);
    gdImageCopyResized(dest, src, 0, 0, 0, 0, 256, 256, 512, 512);
    // gdImageTrueColorToPalette(dest, 1, 256);
    gdImageColorTransparent(dest, gdImageColorExact(dest, red, green, blue));
    gdImagePng(dest, pngout);
    printf("Transparent out: %d\n", gdImageGetTransparent(dest));

    /* Close the file. */
    fclose(pngout);

    /* Destroy the images in memory. */
    gdImageDestroy(src);
    gdImageDestroy(dest);

    return 0;
}
</code></pre></div></div>
<h2 id="gdb">GDB</h2>
<p>Useful commands:</p>
<ul>
<li>p (print - show a variable’s value)</li>
<li>b (break - set a breakpoint)</li>
<li>s (step - step into)</li>
<li>n (next - step over)</li>
<li>c (continue)</li>
<li>u (until - continue until a line past the current one)</li>
<li>finish (run until the current function returns)</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gdb example
(gdb) break 1
(gdb) run
Breakpoint 1, main () at example.c:15
15          pngout = fopen("test.png", "wb");
</code></pre></div></div>
<h1 id="epitaph">Epitaph</h1>
<p>I raised an issue in the LibGD project <a href="https://github.com/libgd/libgd/pull/639">https://github.com/libgd/libgd/pull/639</a>, but per the clarification from the project maintainers, the focus should turn to the implementation of the Nginx image filter module, which calls “gdImageCopyResampled” instead of the suggested “gdImageScale”, which would retain transparency.</p>
<p><a href="https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_image_filter_module.c">https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_image_filter_module.c</a></p>Mark IngramNotes on an investigation into a problem resizing certain PNGs using the NGINX image filter module.Maven Local Install2019-11-21T00:00:00+00:002019-11-21T00:00:00+00:00https://www.lincs.dev/blog/maven-local-install<p>Very occasionally the need arises to push a 3rd party JAR that isn’t publicly hosted on Maven into a private repo hosted on S3. The trick is to get Maven to produce the necessary metadata files rather than editing by hand.</p>
<p>Here’s an example with the Amazon ‘in-app-purchasing-2.0.76.jar’ downloadable from <a href="https://developer.amazon.com/apps-and-games/sdk-download">https://developer.amazon.com/apps-and-games/sdk-download</a>, but not published to any of the usual Maven public repos.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl -O https://amazonadsi-a.akamaihd.net/public/Amazon-Mobile-App-SDK-by-Platform/Amazon-Android-SDKs.zip
$ unzip Amazon-Android-SDKs.zip
$ mkdir local-maven-repo
$ mvn deploy:deploy-file -DgroupId=com.amazon -DartifactId=in-app-purchasing -Dversion=2.0.76 -Durl=file:./local-maven-repo -DrepositoryId=local-maven-repo -DupdateReleaseInfo=true -Dfile=Amazon-Android-SDKs/AmazonInAppPurchasing/in-app-purchasing-2.0.76.jar
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-deploy-plugin:2.7:deploy-file (default-cli) @ standalone-pom ---
Uploading to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/2.0.76/in-app-purchasing-2.0.76.jar
Uploaded to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/2.0.76/in-app-purchasing-2.0.76.jar (100 kB at 1.8 MB/s)
Uploading to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/2.0.76/in-app-purchasing-2.0.76.pom
Uploaded to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/2.0.76/in-app-purchasing-2.0.76.pom (401 B at 134 kB/s)
Downloading from local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/maven-metadata.xml
Uploading to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/maven-metadata.xml
Uploaded to local-maven-repo: file:./local-maven-repo/com/amazon/in-app-purchasing/maven-metadata.xml (309 B at 103 kB/s)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.116 s
[INFO] Finished at: 2019-11-21T10:19:34Z
[INFO] ------------------------------------------------------------------------
$ tree local-maven-repo
local-maven-repo
└── com
└── amazon
└── in-app-purchasing
├── 2.0.76
│ ├── in-app-purchasing-2.0.76.jar
│ ├── in-app-purchasing-2.0.76.jar.md5
│ ├── in-app-purchasing-2.0.76.jar.sha1
│ ├── in-app-purchasing-2.0.76.pom
│ ├── in-app-purchasing-2.0.76.pom.md5
│ └── in-app-purchasing-2.0.76.pom.sha1
├── maven-metadata.xml
├── maven-metadata.xml.md5
</code></pre></div></div>
<p>The local-maven-repo can then be synced across to an S3 bucket and used as a Maven repo, for example:</p>
<p><a href="https://github.com/s3-wagon-private/s3-wagon-private">https://github.com/s3-wagon-private/s3-wagon-private</a></p>
<p><a href="https://tech.asimio.net/2018/06/27/Using-an-AWS-S3-Bucket-as-your-Maven-Repository.html">https://tech.asimio.net/2018/06/27/Using-an-AWS-S3-Bucket-as-your-Maven-Repository.html</a></p>Mark IngramVery occasionally the need arises to push a 3rd party JAR that isn’t publically hosted on Maven into a private repo hosted on S3. The trick is to get Maven to produce the necessary metadata files rather than editing by hand.Bazel to the Future2018-09-27T00:00:00+00:002018-09-27T00:00:00+00:00https://www.lincs.dev/blog/bazel-to-the-future<p>One of the promises of <a href="https://bazel.build/">Bazel</a> is “One tool, multiple languages”. Let’s validate this claim - but which languages to choose?</p>
<p>I’ve recently been learning Haskell and came across this <a href="https://www.tweag.io/posts/2018-02-28-bazel-haskell.html">post</a> by Tweag announcing Bazel rules for Haskell. Another project by Tweag provides Haskell <a href="https://github.com/tweag/inline-java/tree/master/jni">JNI bindings</a> and is set up with a Bazel build, so the goal is to create a single Bazel build that produces a Java application calling through JNI into Haskell. To keep it simple, the Haskell code will calculate a Fibonacci number.</p>
<p>Resulting repo is here: <a href="http://github.com/markdingram/java-jni-haskell">http://github.com/markdingram/java-jni-haskell</a>, tested on Fedora.</p>
<p>To run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bazel run //fibhs:main 10
bazel run //fibjava:main 10
...
fib(10) = 89
</code></pre></div></div>
<h1 id="part-1---pure-haskell">Part 1 - Pure Haskell</h1>
<p>First task was to create a pure Haskell library/application for Fibonacci:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>fib :: Int -> Int
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
</code></pre></div></div>
<p>This proved to be straightforward, lifting the template from the <a href="https://github.com/tweag/rules_haskell_examples">rules_haskell_examples</a> repo:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bazel run //fibhs:main 10
...
fib(10) = 89
</code></pre></div></div>
<h1 id="part-2---jni">Part 2 - JNI</h1>
<p>Here the real excitement began. I found this <a href="https://github.com/mhlopko/bazel-jni-example">repository</a> with an example of a C++ JNI build, so that was the starting point.</p>
<p>Importing the upstream Bazel JNI build was straightforward, using this section in the WORKSPACE file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git_repository(
name = "tweag_inline_java",
remote = "https://github.com/tweag/inline-java.git",
tag = "v0.8.4")
</code></pre></div></div>
<p>The JNI library can then be referenced as a dependency with <code class="language-plaintext highlighter-rouge">@tweag_inline_java//jni:jni</code>.</p>
<p>Before calling any Haskell functions, the <a href="https://downloads.haskell.org/~ghc/8.2.2/docs/html/users_guide/ffi-chap.html#making-a-haskell-library-that-can-be-called-from-foreign-code">Haskell FFI</a> requires that <code class="language-plaintext highlighter-rouge">hs_init</code> (from HsFFI.h) is called from C before any other calls. The standard <code class="language-plaintext highlighter-rouge">JNI_OnLoad</code> function in <code class="language-plaintext highlighter-rouge">init.c</code> is defined to trigger this at library load time.</p>
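<p>A minimal <code class="language-plaintext highlighter-rouge">init.c</code> along these lines would do it. This is a sketch rather than the exact file from the repo (the JNI version constant in particular is an assumption):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include &lt;jni.h&gt;
#include &lt;HsFFI.h&gt;

/* Invoked by the JVM when the shared library is loaded via
 * System.loadLibrary; boots the GHC runtime (with no RTS arguments)
 * before any Haskell function is called through the FFI. */
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
    int argc = 0;
    char **argv = NULL;
    hs_init(&amp;argc, &amp;argv);
    return JNI_VERSION_1_8;
}
</code></pre></div></div>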
<p>From this point on I ran into a time-consuming series of irritants, caused by what seemed to be a combination of my inexperience with Bazel/Haskell and misleading or incomplete documentation:</p>
<ul>
<li>
<p>Unable to find the JNI header files in a subdirectory, which turned out to be <a href="https://github.com/bazelbuild/bazel/issues/5497">https://github.com/bazelbuild/bazel/issues/5497</a>. I’m sure there is a more elegant approach, but adding <code class="language-plaintext highlighter-rouge">../</code> to the include paths got past it.</p>
</li>
<li>
<p>a blind alley triggered by the following paragraph in the Bazel docs for cc_binary, linkstatic:</p>
<p><em>The presence of this flag means that linking occurs with the -shared flag to gcc, and the resulting shared library is suitable for loading into for example a Java program. However, for build purposes it will never be linked into the dependent binary, as it is assumed that shared libraries built with a cc_binary rule are only loaded manually by other programs, so it should not be considered a substitute for the cc_library rule. For sake of scalability we recommend avoiding this approach altogether and simply letting java_library depend on cc_library rules instead.</em></p>
<p>It turns out that <code class="language-plaintext highlighter-rouge">cc_library</code> doesn’t retain the dynamic link to the Haskell runtime (check with <code class="language-plaintext highlighter-rouge">ldd</code>), leading to an error when trying to load the library from Java down the line. <code class="language-plaintext highlighter-rouge">cc_binary</code> does include that link, so stick with that.</p>
</li>
<li>
<p>The non-threaded Haskell runtime hangs when loaded from Java, so the threaded runtime is needed. The approach described in <a href="https://github.com/tweag/rules_haskell/issues/437">https://github.com/tweag/rules_haskell/issues/437</a> was the key.</p>
</li>
<li>
<p>Native libraries in <code class="language-plaintext highlighter-rouge">data</code> are handled differently by Bazel’s Java library vs binary rules - see <a href="https://github.com/bazelbuild/bazel/issues/1146">https://github.com/bazelbuild/bazel/issues/1146</a>.</p>
</li>
<li>
<p>one self-inflicted interlude: while wrestling with the above I spent some time seeing if an OSX build could be added, but it soon became apparent this wouldn’t be straightforward. As per <a href="https://github.com/tweag/inline-java/issues/1">https://github.com/tweag/inline-java/issues/1</a>, switching to the cpphs preprocessor is needed; however, the <code class="language-plaintext highlighter-rouge">MIN_VERSION_singletons</code> macro wasn’t provided to cpphs, leading to build errors. This was contrary to what was expected from a similar <a href="https://github.com/glaebhoerl/type-eq/issues/3">issue</a>, but I didn’t investigate any further.</p>
</li>
</ul>
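<p>Putting those lessons together, the shared library ends up as a <code class="language-plaintext highlighter-rouge">cc_binary</code> (not a <code class="language-plaintext highlighter-rouge">cc_library</code>) so that the dynamic link to the Haskell runtime survives. The sketch below uses illustrative file and target names, and how the threaded RTS is selected in practice depends on the rules_haskell version:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># cc_binary keeps the dynamic link to the (threaded) Haskell RTS
# in the resulting .so - verify with ldd.
cc_binary(
    name = "libfibjni.so",
    srcs = ["init.c", "fib_jni.c"],
    linkshared = 1,
    deps = [
        "@tweag_inline_java//jni:jni",
        ":fib-c",  # the Haskell library exposed for C linkage
    ],
)
</code></pre></div></div>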
<h1 id="dependency-graph">Dependency Graph</h1>
<p>One nice feature supported by Bazel is producing a dependency graph:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bazel query --nohost_deps --noimplicit_deps "deps(//fibjava:main)" --output graph > graph.dot
$ /usr/bin/dot -Tpng graph.dot -o deps.png
</code></pre></div></div>
<p>Resulting graph for this exercise:</p>
<p><img src="/assets/deps.png" alt="Deps" /></p>
<h1 id="conclusion">Conclusion</h1>
<p>Bazel has lived up to its claim: a single command can build & run either the pure Haskell or the Java/JNI/Haskell binaries.</p>
<p>The support for tagged import of the upstream Git JNI repository shows that Bazel, despite its monorepo origins, may be able to support multirepo approaches too.</p>
<h1 id="alternate-timeline">Alternate Timeline</h1>
<p><a href="https://eta-lang.org">Eta</a> with its own <a href="https://github.com/jin/rules_eta">Bazel rules</a> is an alternative approach, running Haskell directly on the JVM. Maybe one for later investigation.</p>Mark Ingram