I regularly visit Seoul, and for the last couple of years I've been doing segments of the Seoul Trail, a series of walks that add up to a 150km circuit around the outskirts of Seoul. If you like hiking I recommend it: it's mostly through the hills and wooded areas surrounding the city, or parks within it, and the bits I've done so far have mostly been very enjoyable. Everything is generally well signposted and easy to follow, with varying degrees of difficulty from completely flat paved roads to very hilly trails.
The trail used to be divided into eight segments, but just after my last visit it was reorganised into 21 smaller ones. This was very sensible: the original segments were mostly about 10-20km and took 3-6 hours (with the notable exception of section 8, which was 36km), which can be a bit much (especially section 8, or section 1, which had about 1km of ascent overall). It does complicate matters if you're trying to keep track of what you've done already, though, so I've put together a quick table:
Original | Revised |
---|---|
1 | 1-3 |
2 | 4-5 |
3 | 6-8 |
4 | 9-10 |
5 | 11-12 |
6 | 13-14 |
7 | 15-16 |
8 | 17-21 |
This is all straightforward: the original segments had been arranged to start and stop at metro stations (which I think explains the length of section 8: the metro network is thin around Bukhansan, what with it being an actual mountain) and the new segments are all straight subdivisions. But it's handy to have it written down, and I figured other people might find it useful.
About a year ago I wrote a post about needing a good keyboard, and mentioned that I was thinking of making a mechanical one.
Last week I built one.
Twenty years ago I had a problem with repetitive strain injury (RSI in short). Typing on a normal PC keyboard was quite impossible for me. Someone recommended buying one of those “fancy” ergonomic ones, and that’s how I ended up with a Microsoft Natural Keyboard Elite.
It was a great help. I got used to the layout and my RSI problems went away. Then I bought a second one, to have one at home and one at work.
Time passed, the keycaps wore out, so I got another keyboard; you can read that story in last year’s post about keyboards.
During the last year I looked at several keyboards. I checked KMK, QMK, ZMK and other keyboard firmware solutions. I watched countless videos on how to make a keyboard and read many articles about it.
One layout caught my eye: TGR Alice, as it was a quite simple ergonomic one. Then I found Arisu, which added cursor keys and made some other changes. And finally Adelheid, which added function keys.
As those layouts are available under the MIT license, I took Adelheid and altered its layout a bit:
I had some parts already (ordered for my MS4KMech project, which I abandoned in the meantime) and decided that my first keyboard would be as cheap as possible:
So the cost of parts was about 20 EUR.
To keep costs down I printed the plate in two pieces (the Ender 3 bed size was the limit) and used some screws and plates to hold them together.
I mounted the switches and started soldering. Simple diode-to-diode joints to create the rows. And it is visible that I only watched some videos after the first 3 rows of one half ;D
Instead of pre-forming the diodes, I soldered them first and then used small pliers to form the rows. Later I started cutting the legs before soldering.
Columns were easier. A spool of Kynar wire and stripping the insulation with my nails did the job.
I added a bunch of coloured wires to be able to test whether my soldering resulted in something working. I grabbed a Raspberry Pi Pico from a drawer, flashed it with QMK firmware, and the test bench was ready:
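The row/column wiring forms a keyboard matrix: the firmware selects one row at a time and samples every column, and the per-key diodes stop "ghosting" (current sneaking back through other pressed keys). A toy simulation of one COL2ROW scan pass, just to illustrate the idea (this is not QMK's actual scanning code):

```python
ROWS, COLS = 6, 16

def scan_matrix(read_col, select_row):
    """One scan pass: for each row, select it and sample every column.

    read_col(row, col) stands in for a GPIO read; on real hardware the
    selected row is driven and a pressed key connects it to its column
    through the diode. Returns the set of (row, col) keys seen pressed.
    """
    pressed = set()
    for row in range(ROWS):
        select_row(row)          # on hardware: toggle the row GPIO
        for col in range(COLS):
            if read_col(row, col):
                pressed.add((row, col))
    return pressed
```

With a fake `read_col` that reports one held key at row 2, column 7, the scan returns exactly that position; firmware then maps matrix positions to keycodes via the keymap.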
And it almost worked! One key was not working. It turned out that its diode was soldered in the wrong direction. Happens.
For the second half I took a slightly different strategy. I decided to solder the columns first, as they took me more time than the rows. The whole half went much faster than the first one.
Then all that was left was finding a way to connect both halves (I did not want to make a split keyboard) and join their rows. I used pieces of an old universal PCB and some screws:
I had bought an Ultimate Pico RP2040 especially for a keyboard project months ago. Compared to the Raspberry Pi Pico, this board has a USB-C port and one extra line (GP29). It also has fewer GND pins, so 17 GPIO lines sit on one side. My keyboard has 16 columns, so this allowed me to have all the columns connected on one side of the board and use the other side’s connections for the rows:
It may not look nice but it works :D
I flashed QMK and started testing. It turned out that one column was not working. I resoldered a wire and it was fine.
Three other keys were not working. One needed soldering, but the other ones looked fine.
QMK has a solution for it: “console debugging”. I added a bit of code to the keymap.c file:
```c
bool process_record_user(uint16_t keycode, keyrecord_t *record) {
    // If console is enabled, it will print the matrix position and status
    // of each key pressed
#ifdef CONSOLE_ENABLE
    uprintf("KL: kc: 0x%04X, col: %2u, row: %2u, pressed: %u, "
            "time: %5u, int: %u, count: %u\n",
            keycode, record->event.key.col, record->event.key.row,
            record->event.pressed, record->event.time, record->tap.interrupted,
            record->tap.count);
#endif
    return true;
}
```
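For the uprintf() output to actually appear, the console feature has to be switched on for the keymap. This is the standard QMK switch in the keymap's rules.mk:

```make
# rules.mk: enable the debug console so uprintf() messages are emitted
CONSOLE_ENABLE = yes
```

The messages can then be watched with the `qmk console` command (or the older hid_listen tool).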
This allowed me to see how the Enter and End keys were being detected. It turned out that I had written wrong row/column values for both of them.
I made some mistakes with this keyboard. They were expected, as usual with first attempts.
First mistake: the 2mm plate. I should have done 1.5mm so the switches hold better. The total thickness can even be 3-5 mm, but with holes cut so that the switches clip in properly. A thicker plate would also make the keyboard less bendy.
Second one: holes for stabilisers. I have a few 2u-2.75u keys and they should get stabilisers, but I made the holes the wrong way, so the ones I have do not fit. We will see how it affects keyboard use.
Third one: soldering. I do not remember when I last soldered so many points, so some of the joints are shitty. I just had to fix one more…
The community behind mechanical keyboards has created many tools. Let me try to list the ones I used.
First comes the QMK firmware, which lets you forget about programming microcontrollers: the keyboard is defined as a simple JSON file plus one C file with the keymap definition.
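A rough sketch of what such a JSON definition looks like. The board name and pin assignments below are invented for illustration, not my actual configuration:

```json
{
    "keyboard_name": "my_adelheid",
    "processor": "RP2040",
    "bootloader": "rp2040",
    "diode_direction": "COL2ROW",
    "matrix_pins": {
        "rows": ["GP16", "GP17", "GP18", "GP19", "GP20", "GP21"],
        "cols": ["GP0", "GP1", "GP2", "GP3", "GP4", "GP5", "GP6", "GP7",
                 "GP8", "GP9", "GP10", "GP11", "GP12", "GP13", "GP14", "GP15"]
    },
    "usb": {"vid": "0xFEED", "pid": "0x0001", "device_version": "1.0.0"}
}
```

QMK reads the matrix pins and diode direction from here, and the keymap.c file only has to map matrix positions to keycodes.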
Then there is Keyboard Layout Editor, which allows you to define a layout of keys in any crazy way you want. Raw data from it is the base for several other tools.
Keyboard Firmware Builder is the simplest way to manage the wiring of a keyboard. You paste raw data from the previous tool and start simplifying the wiring. It can also generate ready-to-use QMK firmware images for several popular microcontrollers (but not the RP2040 I used).
Plate & Case Builder also takes layout data, and lets you choose switch/stabiliser types plus a bunch of other options. It then generates a plate for your layout, with an option to download it in SVG, DXF or EPS format. I took the SVG, extruded it to 2mm and had a base plate for 3D printing.
The question for the end: was it worth the time? I think it was. I learnt new things and got something working. Something I did on my own. Sure, it is far from perfect, but that gives me room for improvements.
I have two space keys and I already see that most of the time I use the right one. There are no layers defined yet, though I have a key reserved for them. Some keycaps may get replaced with other ones (the Calculator one next to the left Space begs for a lower profile).
I have links to many interesting layouts, so who knows, maybe I will make a new keyboard soon. A split, orthogonal one. With a volume knob. We will see.
About seven years ago I visited friends. They were playing some game and told me that I might like it. I asked some questions, watched them play, etc.
A few days later I spent 70 PLN and bought Factorio.
The idea of Factorio is simple: you crash-land on some planet and do everything needed to send a rocket into space. Mine resources, turn ore into plates, plates into electronic components, components into assembly machines. And use those machines to build more of everything.
And walk around to look for more resources.
And fight the local fauna, as they do not like you or your effect on their planet. Pollution made by your factory speeds up their evolution, so you need to develop stronger weapons, etc.
It is easy to get addicted to the game. You start playing ‘for an hour’ in the afternoon and ‘one hour later’ you see the sunrise through the window. And your legs remind you that you have been sitting in front of the computer for 12+ hours straight.
Been there, done that. Then I learnt how to take breaks and set some limits.
Like many games today, Factorio has a set of achievements. Most of them you get during normal play. Some require special preparations.
I remember doing the “Getting on track like a pro” one. It requires building a locomotive within the first 90 minutes of the game. I did it on the second attempt, because on the first one I had not noticed that the locomotive needs to be placed on rails, and I ran out of time.
The one I liked the most was “Lazy bastard”, where you can hand-craft no more than 111 items. This requires counting each move at the beginning of the game, as there is a reserve of only 10 items. A great way to learn how to make the factory produce things.
It took me over six years to get all the achievements.
To win the game you need to send a rocket into space. And there are three achievements for it:
And there is a funny one: you can send yourself into space and the game handles it properly. The way to do it is to put a vehicle (car, tank, locomotive) into a rocket and then ‘take a seat’ in the vehicle. You get the usual rocket launch animation, but from a different perspective ;D
In October 2024 the Factorio team released version 2.0 of the game. And the “Space Age” DLC.
I decided to play 2.0 first, to see what had changed. And there were many changes! And new achievements :D
It took me some time, but once I finished I bought the “Space Age” DLC and started playing again. Without rush or pressure to build a complex factory.
After 260 hours I finished, but I did not have the feeling that I had won anything. In the normal game you build a factory to send a satellite into space. In the DLC you have to make a useless ship and send it “where no man has been before”.
But that does not give you anything. Going to the “shattered planet” gives the 12th science pack, but it is needed only to speed up research.
But the base game is just a piece of the whole. The most important piece, of course, but there are so many mods for Factorio…
Some are simple ‘quality of life’ ones, some are ‘why is this not in the game’ ones (like Rate Calculator).
And there are complex ones like ‘Krastorio 2’, ‘Space Exploration’ or ‘Warptorio’, which change the game, add lots of buildings and technologies, change the winning requirements, etc.
You can enable countless mods to make your game completely different. Easier, harder, more complex etc.
One of the things I like is multiplayer. I had my own server where I played countless hours with friends. We did ‘Krastorio 2’, we built a huge megabase.
We rebuilt half of the megabase after a visit from the biters. It was a ‘rebuild or start from scratch’ moment and we decided that starting from scratch would be boring. It turned out that the rebuild took only a few hours once we got all the reactors running, so electricity was not a problem.
One of the great things around Factorio is the community. Modders, youtubers, speedrunners (I learnt many tricks watching speedruns), people on forums, etc.
I remember how one day Factorio was misbehaving on my system. Everything else was working fine; the problem was only visible when I played the game. I reported a bug, got contacted by someone from the development team, and we joined a multiplayer game. Some minutes later I got “Please check stability of your system. Run memtest or something”. And a few hours later memtest86 showed an issue with my memory configuration.
The money spent on Factorio was the best cash I have spent when it comes to my game-related expenses.
And there are still four achievements to get :P
In 2012 I wrote the “What interest me in ARM world” post. I listed the hardware I had played with and some words about what I was looking for.
I decided that it is time to write a 2025 one. About AArch64.
When it comes to the AArch64 world, I skipped most of the Seriously Bad Computers and concentrated on servers.
Over all those years I have used several servers, from X-Gene to NVIDIA Grace. Some for longer, some for shorter tasks. Most of those systems were in Linaro or Red Hat labs and I used them remotely.
At home I had an AppliedMicro Mustang. It served me as a desktop in 2014 (for a few days) and as my main development system for a few years.
Of course there were SBCs too. Pine64, RockPro64, Espressobin, Honeycomb, NanoPC-T6. Even a Raspberry Pi 3, which I bought on announcement day to check how bad it was.
I do not use any of them to run services at home.
I used the Espressobin as my router for a month or two. The Honeycomb was my main development box (until a MacBook replaced it). The rest of them I bought to check what the SBC market looks like.
I use an Apple MacBook Pro (14”, M1 Pro, 2021) as my work laptop (provided by Red Hat). It runs Fedora 41 Asahi Remix and has served me for almost 3 years. During that time I ran macOS maybe four times. Most of the time I use it as a headless machine over an SSH+Wayland connection. A nice, fast machine for my Linux/AArch64 work.
It still has several unsupported things. No Thunderbolt, no microphone, etc. I do not blame the Asahi team for it: they have done great work. I learnt to live with those gaps because the system performance pays for it.
After 20+ years of working with Arm hardware I decided to stop being an early adopter. Sure, it is fun to be one of the first users of a platform, but I have learnt to value my time.
When I pay, I want hardware that works. Let it boot Debian ‘testing’ or Fedora ‘rawhide’ from a generic install image. Being able to run *BSD would be a nice bonus, showing that the platform was well tested.
Hardware which boots after pressing the power button. And shuts down completely on the poweroff command, including all peripherals and expansion cards.
I do not care much whether it uses ACPI or DeviceTree to describe the hardware to the operating system, as long as it is done in a sane way. Source code for the firmware is optional when it works properly.
On a new Arm system I would run the BSA ACS (or ask someone with the hardware to do it), to check whether the vendor even looked at the BSA specification before building the hardware.
Who knows, maybe one day I will use an Arm system for home services.
Instead of yet another x86-64 thin terminal. Which works. Out of the box.
In recent years Arm released some specifications in an effort to help standardise their market. We got the Server Base System Architecture (SBSA), then the Base System Architecture (BSA) and finally the PC Base System Architecture (PC-BSA). Both SBSA and PC-BSA extend BSA (SBSA 7.0 was rebased on top of BSA).
Did these documents change the market? That’s a discussion for another time.
Each of the mentioned specifications comes with a compliance checklist referencing document sections such as B_PE_11, which states:
Each PE must implement a minimum of six breakpoints, two of which must be able to match virtual address, contextID or VMID.
To visualize these checklists I created the Arm BSA/PC-BSA/SBSA checklist page, where this data is presented as a table. How it is generated is explained below.
Following the checklist manually is a difficult task, so Arm released Architecture Compliance Suites (ACS in short) for the BSA and SBSA specifications, which run tests and tell you whether your hardware is compliant. PC-BSA does not have its own ACS yet.
NOTE: I ran the compliance suites only on UEFI+ACPI systems, so I do not know how the BSA ACS behaves on DeviceTree-based ones.
At the start, ACPI tables are parsed and hardware details are checked, such as GIC, SMMU, PCIe, etc. You then get a summary of the information:
Creating Platform Information Tables
PE_INFO: Number of PE detected : 4
GIC INFO: GIC version : v3
GIC_INFO: Number of GICD : 1
GIC_INFO: Number of GICR RD : 1
GIC_INFO: Number of ITS : 1
TIMER_INFO: System Counter frequency : 1000 MHz
TIMER_INFO: Number of system timers : 0
WATCHDOG_INFO: Number of Watchdogs : 1
PCIE_INFO: Number of ECAM regions : 1
PCIE_INFO: Number of BDFs found : 4
PCIE_INFO: Number of RCiEP : 2
PCIE_INFO: Number of RCEC : 0
PCIE_INFO: Number of EP : 1
PCIE_INFO: Number of RP : 1
PCIE_INFO: Number of iEP_EP : 0
PCIE_INFO: Number of iEP_RP : 0
PCIE_INFO: Number of UP of switch : 0
PCIE_INFO: Number of DP of switch : 0
PCIE_INFO: Number of PCI/PCIe Bridge : 0
PCIE_INFO: Number of PCIe/PCI Bridge : 0
SMMU_INFO: Number of SMMU CTRL : 1
SMMU_INFO: SMMU index 00 version : v3.1
Peripheral: Num of USB controllers : 1
Peripheral: Num of SATA controllers : 1
Peripheral: Num of UART controllers : 1
The format may differ a bit between the BSA ACS and the SBSA ACS, but the data is there.
Then tests run in groups (PE, Memory map, GIC, SMMU, PCIe, etc.). Each test can return one of three values: PASS, FAIL or SKIPPED (usually when the requirements to run the test are not met):
28 : Check Fine Grain Trap Support
Failed on PE - 0
S_L7PE_01
Checkpoint -- 1 : Result: FAIL
29 : Check for ECV support
Failed on PE - 0
S_L7PE_02
Checkpoint -- 1 : Result: FAIL
30 : Check for AMU Support
Failed on PE - 0
S_L7PE_03
Checkpoint -- 1 : Result: FAIL
31 : Checks ASIMD Int8 matrix multiplc : Result: PASS
32 : Check for BFLOAT16 extension : Result: PASS
33 : Check PAuth2, FPAC & FPACCOMBINE : Result: PASS
34 : Check for SVE Int8 matrix multiplc : Result: PASS
35 : Check for data gathering hint : Result: PASS
36 : Check WFE Fine tune delay feature
Recommened WFE fine-tuning delay feature not implemented
S_L7PE_09
Checkpoint -- 2 : Result: SKIPPED
As you can see, when a test does not pass you get a tag (like S_L7PE_01) pointing to the relevant part of the specification.
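Pulling the test number, title, tag and result out of such output is mostly a matter of a few regular expressions. A hypothetical, simplified sketch of such a parser (the real scripts handle more corner cases):

```python
import re

# A test header line: "  31 : Checks ASIMD Int8 matrix multiplc : Result: PASS"
# (the "Result:" part is absent on multi-line failures).
TEST_RE = re.compile(r"^\s*(\d+)\s*:\s*(.+?)(?:\s*:\s*Result:\s*(\w+))?\s*$")
# A bare specification tag line, e.g. "S_L7PE_01".
TAG_RE = re.compile(r"^\s*([A-Z]+_[A-Z0-9_]+)\s*$")
# A result on a continuation line, e.g. "Checkpoint --  1 : Result: FAIL".
RESULT_RE = re.compile(r"Result:\s*(\w+)")

def parse_acs_log(text):
    """Return {test_number: {'title', 'tag', 'status'}} from (S)BSA ACS output."""
    tests = {}
    current = None
    for line in text.splitlines():
        m = TEST_RE.match(line)
        if m:
            current = int(m.group(1))
            tests[current] = {"title": m.group(2),
                              "tag": None,
                              "status": m.group(3)}
            continue
        if current is None:
            continue
        m = TAG_RE.match(line)
        if m:
            tests[current]["tag"] = m.group(1)
            continue
        m = RESULT_RE.search(line)
        if m:
            tests[current]["status"] = m.group(1)
    return tests
```

Fed the fragment above, this yields FAIL with tag S_L7PE_01 for test 28, PASS for test 31, and SKIPPED with tag S_L7PE_09 for test 36.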
Each ACS can be run in verbose mode by adding “-v 1” to the command line. The amount of detail is increased to the level where logging the output is highly recommended. You can compare my older logs:
In my sbsa-ref-status repository I have scripts that gather and parse data for QEMU’s SBSA Reference Platform (sbsa-ref in short). The result is a set of YAML files (status-bsa.yml and status-sbsa.yml) that contain information on how the tests went:
31:
  level: '7'
  status:
    cortex-a57: FAIL
    cortex-a72: FAIL
    max: PASS
    neoverse-n1: FAIL
    neoverse-n2: PASS
    neoverse-v1: PASS
  tags: S_L7PE_04
  title: Checks ASIMD Int8 matrix multiplc
As you can see, the test for S_L7PE_04 passed on some cpu core models and failed on the older ones. This pattern continues for other tests and tags. Several entries have only SKIPPED values because the hardware lacked something required to run them.
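Summarising such a file is then simple. A quick sketch, assuming the YAML was already loaded into a dict (e.g. with PyYAML's safe_load) in the layout shown above:

```python
from collections import Counter

def summarise(tests):
    """Count PASS/FAIL/SKIPPED per cpu model across all tests.

    'tests' is the parsed content of a status-(s)bsa.yml file:
    {test_number: {'status': {cpu_model: result, ...}, ...}, ...}
    """
    per_cpu = {}
    for test in tests.values():
        for cpu, result in test.get("status", {}).items():
            per_cpu.setdefault(cpu, Counter())[result] += 1
    return per_cpu
```

The resulting per-cpu counters make it easy to spot which core models are the weak ones, which is exactly the pattern visible in the entry above.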
Those scripts should work with results from other hardware.
Some tags do not have coverage in the ACS. And some tests check for things that are not present in the checklists in the specifications. In these cases it is important to look into the ACS documentation:
Both of these expand on the checklists from the specifications with additional information: which SBSA level a test was written for, whether it is tested in the UEFI or Linux environment, whether an additional exerciser card is required, etc.
I used both to expand the status-(s)bsa.yml files to ensure all tested entries are listed.
To generate a page with the checklist table I use another YAML file: xbsa-checklist.yml. This file maps tags from the specifications into groups and subgroups and keeps information on whether a tag is required for BSA v1/v2, PC-BSA or the SBSA levels. I wrote it by hand and it needs to be updated with every specification update.
Next, generate-xbsa.py needs to be run, which generates the HTML page with the table.
Changes to the ACS may alter the test numbers or tags used. I reported several issues against both the BSA ACS and SBSA ACS about this and they were handled. At this moment all tests have tags assigned.
If the generated page lists “ACS only tests” entries, it means one of the status-*.yml files needs to be updated because some unhandled tag was used. Or an ACS change had an error in a tag name.
Tests may be renamed; in such cases the status files will get the new names. When test numbers change (which is rare), manual checking may be required.
According to Arm developers (BSA ACS issue #395) there will be a PC-BSA ACS in the first week of December. Once it is released, a new parser will need to be written (like the ones for BSA ACS and SBSA ACS) and the page generator updated to use this information.
This page is one of the projects I plan to abandon in 2025. It was a useful tool for checking what needed to be done for the SBSA Reference Platform, either in virtual hardware (QEMU) or in firmware (TF-A + EDK2).
None of the Arm hardware I use at home is SBSA compliant. Running the BSA ACS on some of those machines causes them to hang. I do not expect this situation to change in the coming months.
The current page will remain online but I do not plan to invest time in updating it.
Some time ago I was informed that Red Hat will not prolong its membership in the Linaro DataCentre Group. Which for me means the end of my 2nd adventure with Linaro.
I was in Linaro from April 2010 to the end of May 2013, and then from April 2016 to the end of the current month. So, two times.
But I had been on my way out of Linaro twice before. First in October 2012, when someone decided that it was not yet time for me to go. And then in May 2013, when I finally left.
Those eight and a half years of Linaro work were a good time.
First we were doing OpenStack. It went from “needs hacks” to being ready for use out of the box. During 6 years I wrote hundreds of patches and did countless reviews.
In the meantime cloud providers started offering Arm instances and the pressure to keep OpenStack working became smaller. Why maintain a whole infrastructure when you can rent a virtual one?
Part of that job was extending CirrOS images to behave properly on UEFI systems (AArch64 and x86-64). I defined CI jobs, handled the migration to GitHub and helped with several releases.
There was some work done on distribution images as well.
Then I moved to the SBSA Reference Platform. It was interesting from the beginning. I had the feeling of being more of a manager than a developer: I had to collect ideas from everyone who worked on it and get something working that would be acceptable upstream.
This ended up as internal versioning of the platform, and then we started adding more features and upgrading the platform whenever something interesting landed in QEMU. Now sbsa-ref uses the Neoverse-N2 cpu by default, can be used in a NUMA setup, has a defined cpu topology and more.
I learnt firmware stuff (TF-A, EDK2), reviewed countless patches, etc.
Too bad that during those years I was not able to buy any SBSA compliant hardware below 2000 EUR :(
One of my tasks during my whole time at Linaro was handling Continuous Integration: defining new jobs, taking care of old ones, maintaining the machines we used as Jenkins runners.
This took me into interesting places. There was a lot of Python. I even managed to be involved in the manylinux images used to build Python packages.
I am moving back to Red Hat. There are some open positions where I may fit, so I have to take a look and choose.
About two years ago I got the idea of gathering information about AArch64 SoCs. Mostly to have a place to show how many of them still use outdated v8.0 cpu cores.
Over those years many things changed. And there were funny moments too.
The current stats of the table are:
ISA level | Amount of SoC entries |
---|---|
v8.0 | 89 |
v8.1 | 1 |
v8.2 | 58 |
v8.4 | 1 |
v8.5 | 6 |
v8.6 | 8 |
v9.0 | 24 |
v9.2 | 8 |
Architecture updates are visible on the market. More and more SoC vendors go for newer designs instead of staying in the past. Most of those cases are mobile phones. Cloud systems also moved to the new designs, as we now have Arm Neoverse-V2 based instances available in several places.
As most SoC vendors have switched to Arm designs, I decided to create a table showing some more information about them. And so I created the AArch64 cpu core information table.
It lists all Arm designs (Cortex-A/X, Neoverse) with direct links to their TRM documentation, ID numbers, memory sizes, supported page sizes, SVE vector lengths and the level of AArch32 support.
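The ID numbers come from each core's MIDR_EL1 register, whose field layout is defined in the Arm Architecture Reference Manual. A small sketch of decoding one (the example value in the usage note is constructed for illustration):

```python
def decode_midr(midr):
    """Split a MIDR_EL1 value into its architectural fields."""
    return {
        "implementer":  (midr >> 24) & 0xFF,   # 0x41 == Arm Ltd
        "variant":      (midr >> 20) & 0xF,    # major revision (rN)
        "architecture": (midr >> 16) & 0xF,    # 0xF == use ID registers
        "part_num":     (midr >> 4)  & 0xFFF,  # e.g. 0xD0B == Cortex-A76
        "revision":     midr         & 0xF,    # minor revision (pN)
    }
```

For example, 0x414FD0B1 decodes to implementer 0x41 (Arm), part number 0xD0B (Cortex-A76), revision r4p1, which is how entries submitted as cpuinfo dumps get matched to cores in the table.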
The code is open enough to also handle designs from other SoC vendors, but I have not seen such documentation available in public.
Anyone can submit a new entry as a new issue on GitHub. Most people use that way, and it is also the recommended one.
There is a column about AArch32 support. Most v8.x SoCs support running 32-bit binaries, and some support booting 32-bit kernels. The v9.x ones finally got rid of any kind of 32-bit support.
At some point I added links to Arm core TRMs (Technical Reference Manuals). Then I added information on when each SoC was announced, so it can be seen how new or old it is.
There are sometimes funny moments after updates. In January 2023 I added the Alibaba Yitian 710 SoC to the table. It was the first Neoverse-N2 system there.
About a month later some random person wrote to me on IRC:
I’m glad you put the Yitian 710 in your table, now I can point people at it, and tell them to look at that for what processor its using.
Another interesting example: Longhorn published information for the NVIDIA Grace SoC when those systems were quite rare.
Or when Jeff Geerling started reviewing an AmpereOne system and published data for the Ampere1A (AC04) cpu core, which was not yet in the table. There was public information that AC04 existed, but no cpuinfo data for it.
When I created this table I did not think it would get this popular. Now it feels like whenever a device with a new SoC appears on the market, someone sends me data for a new entry.
I have automated most of the work related to maintaining the table, so the project will stay alive for as long as people send me data for it.
More and more things move to a cloud.
Times when people used traditional servers have passed.
Who has not heard such sentences in previous years? So it is time for me to move to the cloud as well.
A cloud powered by Arm, of course.
Call me old-fashioned, but I like to self-host my services. At least some of them (web, private git repos, mail).
But a few months ago I got an e-mail from OVH:
As part of our project to make our datacentres hyper-resilient, we are starting the transformation of the RBX1 datacentre. Our aim is to modernise our infrastructure, in particular, the building where we house servers hosting all our cloud services. Despite our utmost efforts, this modernisation cannot be carried out without affecting your services.
We are writing to inform you that, as a result, the following Bare Metal servers will be closing down on 31 December 2024
I checked their prices for newer machines and decided that it was a bit too much. So it was time to move somewhere else.
I looked at the options and then remembered that someone had mentioned the ‘Free tier’ on Oracle Cloud:
For free. I dusted off my old account, checked that it was a “Pay As You Go” one and started playing with it.
The first try was “let’s use a quarter”: one cpu core, 6 GB of memory and 50 GB of storage. It turned out that this is quite enough to run several websites with small traffic. After some tweaks here and there the machine got some testing and now works.
This blog has been hosted on Oracle Cloud for a few weeks now and so far no one has complained.
My previous servers handled my mail in the standard way: Postfix as SMTP, Dovecot for IMAP, with added stuff on top like Amavis, ClamAV, SpamAssassin, OpenDKIM, etc. And Roundcube for webmail if someone needed it.
But my system administration skills were never top of the shelf. It worked, I tweaked it from time to time, etc. So this time I decided to go with an “all-in-one” solution.
I went with mailcow: a bunch of mail services running as containers, handling things and providing me with a WebUI for configuration.
I added accounts and aliases, and set up IMAP sync jobs so all users had their mail from the previous server. I handled the DNS changes and the server went online.
In the meantime I checked the other services I run:
My installation of Forgejo remembers Gitea times. So I took some time and cleaned the configuration to get rid of the gitea names. Now it is running with all the old repositories.
The Factorio multiplayer server went to the trash. There is no Linux/arm64 binary available.
The Discord bot was Python. It migrated fine.
Other shit? I went through it, killed some, migrated some to other places.
How well will it serve my needs? Time will tell. My first server had 4 GB of ram and a dual-core Atom cpu (2 cores, 4 threads), with rotating rust as storage. The last one was an i5-750 (4 cores) with 16 GB of memory and rotating rust.
So the current duo of 2 cores with 8 GB each should work for some time.
Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!
For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.
Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)
Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.
Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!
Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!
It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like a DIMM socket for memory, lots of networking, and 3 SATA disk interfaces.
The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer.
I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/
So...
I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-)
A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup!
I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be?
I was shocked! It Just Worked (TM)
I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow!
It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board!
The two things missing compared to the Macchiatobin? The memory is soldered on, so it can't be expanded (but hey, 24GB is plenty for me!). It also doesn't have a PCIe slot, but it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs.
Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online.
FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-)
Here are some pictures...
It (was) that time of year again - last weekend we hosted a bunch of nice people at our place in Cambridge for the annual Debian UK OMGWTFBBQ!
Lots of friends, lots of good food and drink. Of course lots of geeky discussions about Debian, networking, random computer languages and... screws? And of course some card games to keep us laughing into each night!
Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!
Warning: If you're not into meat, you might want to skip the rest of this...
This year, I turned 50. Wow. Lots of friends and family turned up to help me celebrate, with a BBQ (of course!). I was very grateful for a lovely set of gifts from those awesome people, and I have a number of driving experiences to book in the next year or so. I'm going to have so much fun driving silly cars on and off road!
However, the most surprising gift was something totally different - a full-day course of hands-on pork butchery. I was utterly bemused - I've never considered doing anything like this at all, and I'd certainly never talked to friends about anything like it either. I was shocked, but in a good way!
So, two weekends back Jo and I went over to Empire Farm in Somerset. We stayed nearby so we could be ready on-site early on Sunday morning, and then we joined three other people doing the course. Jo was there to observe, i.e. to watch and take (lots of!) pictures.
I can genuinely say that this was the most fun surprise gift I've ever received! David Coldman, the master butcher working with us, has been in the industry for many years. He was an excellent teacher, showing us everything we needed to know and being very patient with us when we needed it. It was great to hear his philosophy too - he only uses the very best locally-sourced meat and focuses on quality over quantity. He showed us all the different cuts of pork that a butcher will make, and we were encouraged to take everything home - no waste here!
At the beginning of the day, we each started with half a pig. Over the next several hours, we steadily worked our way through a series of cuts with knife and saw, making the remaining pig smaller and smaller as we went.
We finished the day with three sets of meat. First: a stack of vacuum-packed joints, chops and steaks ready for cooking and eating at home. Second: a box of off-cuts that we minced and made into sausages at the end of the day. Finally: a bag of skin and bones. Our friend's dog got some of the bones, and Jo turned a lot of the skin into crackling that we shared with friends at the OMGWTFBBQ the next weekend.
This was an amazing day. Massive thanks to my good friend Chris Walker for suggesting this gift. As I told David on the day: this was the most fun surprise gift I've ever received. Good hands-on teaching in a new craft is an incredible thing to experience, and I can't recommend this course highly enough.