Over the last few years Arm has released several specifications in an effort to help standardise their market. We got the Server Base System Architecture (SBSA), then the Base System Architecture (BSA) and finally the PC Base System Architecture (PC-BSA). Both SBSA and PC-BSA extend BSA (SBSA 7.0 was rebased on top of BSA).
Did these documents change the market? That’s a discussion for another time.
Each of the mentioned specifications comes with a compliance checklist referencing document sections such as B_PE_11, which states:
Each PE must implement a minimum of six breakpoints, two of which must be able to match virtual address, contextID or VMID.
To visualize these checklists I created the Arm BSA/PC-BSA/SBSA checklist page, where this data is presented as a table. How it is generated is explained below.
Following the checklist manually is a difficult task, so Arm released Architecture Compliance Suites (ACS for short) for the BSA and SBSA specifications, which run tests and tell you whether your hardware is compliant. PC-BSA does not have its own ACS yet.
NOTE: I ran the compliance suites only on UEFI+ACPI systems, so I do not know how BSA ACS behaves on DeviceTree based ones.
At the start, ACPI tables are parsed and hardware details are checked, such as GIC, SMMU, PCIe etc. You then get a summary of the information:
Creating Platform Information Tables
PE_INFO: Number of PE detected : 4
GIC INFO: GIC version : v3
GIC_INFO: Number of GICD : 1
GIC_INFO: Number of GICR RD : 1
GIC_INFO: Number of ITS : 1
TIMER_INFO: System Counter frequency : 1000 MHz
TIMER_INFO: Number of system timers : 0
WATCHDOG_INFO: Number of Watchdogs : 1
PCIE_INFO: Number of ECAM regions : 1
PCIE_INFO: Number of BDFs found : 4
PCIE_INFO: Number of RCiEP : 2
PCIE_INFO: Number of RCEC : 0
PCIE_INFO: Number of EP : 1
PCIE_INFO: Number of RP : 1
PCIE_INFO: Number of iEP_EP : 0
PCIE_INFO: Number of iEP_RP : 0
PCIE_INFO: Number of UP of switch : 0
PCIE_INFO: Number of DP of switch : 0
PCIE_INFO: Number of PCI/PCIe Bridge : 0
PCIE_INFO: Number of PCIe/PCI Bridge : 0
SMMU_INFO: Number of SMMU CTRL : 1
SMMU_INFO: SMMU index 00 version : v3.1
Peripheral: Num of USB controllers : 1
Peripheral: Num of SATA controllers : 1
Peripheral: Num of UART controllers : 1
The format may differ a bit between BSA ACS and SBSA ACS, but the data is there.
Then the tests run in groups (PE, Memory map, GIC, SMMU, PCIe etc.). Each test can return one of three results: PASS, FAIL or SKIPPED (the latter usually when the requirements to run the test are not met):
28 : Check Fine Grain Trap Support
Failed on PE - 0
S_L7PE_01
Checkpoint -- 1 : Result: FAIL
29 : Check for ECV support
Failed on PE - 0
S_L7PE_02
Checkpoint -- 1 : Result: FAIL
30 : Check for AMU Support
Failed on PE - 0
S_L7PE_03
Checkpoint -- 1 : Result: FAIL
31 : Checks ASIMD Int8 matrix multiplc : Result: PASS
32 : Check for BFLOAT16 extension : Result: PASS
33 : Check PAuth2, FPAC & FPACCOMBINE : Result: PASS
34 : Check for SVE Int8 matrix multiplc : Result: PASS
35 : Check for data gathering hint : Result: PASS
36 : Check WFE Fine tune delay feature
Recommened WFE fine-tuning delay feature not implemented
S_L7PE_09
Checkpoint -- 2 : Result: SKIPPED
As you can see, when a test does not pass you get a tag (like S_L7PE_01) pointing to the relevant part of the specification.
Each ACS can be run in verbose mode by adding “-v 1” to the command line. The amount of detail increases to the level where logging the output is highly recommended. You can compare my older logs:
In my sbsa-ref-status repository I have scripts that gather and parse data for QEMU’s SBSA Reference Platform (sbsa-ref for short). The result is a set of YAML files (status-bsa.yml and status-sbsa.yml) that contain information on how the tests went:
31:
level: '7'
status:
cortex-a57: FAIL
cortex-a72: FAIL
max: PASS
neoverse-n1: FAIL
neoverse-n2: PASS
neoverse-v1: PASS
tags: S_L7PE_04
title: Checks ASIMD Int8 matrix multiplc
As you can see, the test for S_L7PE_04 passed on some cpu core models and failed on older ones. This pattern continues for other tests and tags. Several entries have only SKIPPED values because the hardware lacked something required to run them.
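These YAML files are easy to consume programmatically. As a minimal sketch, the snippet below counts results per cpu model; the data here is an inlined, made-up subset mirroring the structure above, while real code would load the file with yaml.safe_load():

```python
from collections import Counter

# Made-up subset mimicking the structure of status-sbsa.yml shown above;
# real code would do: tests = yaml.safe_load(open("status-sbsa.yml"))
tests = {
    31: {
        "level": "7",
        "status": {
            "cortex-a57": "FAIL",
            "cortex-a72": "FAIL",
            "max": "PASS",
            "neoverse-n1": "FAIL",
            "neoverse-n2": "PASS",
            "neoverse-v1": "PASS",
        },
        "tags": "S_L7PE_04",
        "title": "Checks ASIMD Int8 matrix multiplc",
    },
}

# Tally PASS/FAIL/SKIPPED per cpu model across all tests
per_core: dict[str, Counter] = {}
for test in tests.values():
    for core, result in test["status"].items():
        per_core.setdefault(core, Counter())[result] += 1

for core in sorted(per_core):
    print(core, dict(per_core[core]))
```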
Those scripts should work with results from other hardware as well.
Some tags have no coverage in the ACS, and some tests check for things that are not present in the checklists in the specifications. In these cases it is important to look into the ACS documentation:
Both of these expand on the checklists from the specifications with additional information: which SBSA level a test was written for, whether it is tested in the UEFI or the Linux environment, whether an additional exerciser card is required etc.
I used both to expand the status-(s)bsa.yml files to ensure all tested entries are listed.
To generate the page with the checklist table I use another YAML file: xbsa-checklist.yml. This file maps tags from the specifications into groups and subgroups and records whether a tag is required for BSA v1/v2, PC-BSA or particular SBSA levels. I wrote it by hand and it needs to be updated with every specification update.
Next, generate-xbsa.py needs to be run, which generates an HTML page with the table.
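I will not claim this is how generate-xbsa.py actually works, but the core of such a generator is a join between the hand-written checklist mapping and the parsed test statuses. A hypothetical sketch, with all names and data shapes invented for illustration:

```python
# Hypothetical sketch of joining xbsa-checklist.yml data with test results;
# the real generate-xbsa.py may be structured quite differently.
from html import escape

checklist = {  # tag -> where it is required (shape assumed, not verbatim)
    "B_PE_11": {"group": "PE", "bsa": "v1/v2", "sbsa": "-"},
    "S_L7PE_04": {"group": "PE", "bsa": "-", "sbsa": "level 7"},
}
statuses = {  # tag -> overall ACS result, e.g. parsed from status-*.yml
    "S_L7PE_04": "PASS",
    "S_XYZ_99": "FAIL",  # a tag the checklist file does not know about
}

rows = []
for tag, info in sorted(checklist.items()):
    result = statuses.get(tag, "not covered by ACS")
    rows.append(
        f"<tr><td>{escape(tag)}</td><td>{escape(info['group'])}</td>"
        f"<td>{escape(result)}</td></tr>"
    )

# Tags used by the ACS but missing from the checklist mapping fall out here
acs_only = sorted(set(statuses) - set(checklist))
print("\n".join(rows))
print("ACS only tests:", acs_only)
```

A side effect of this join is that unknown tags surface automatically, which is what the generated page shows as “ACS only tests” entries.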
Changes to the ACS may alter the test numbers or tags used. I reported several issues against both BSA ACS and SBSA ACS about it and they were handled. At the moment all tests have tags assigned.
If the generated page lists “ACS only tests” entries, it means that one of the status-*.yml files needs to be updated because some unhandled tag was used, or that an ACS change had an error in a tag name.
Tests may be renamed; in such cases the status files get the new names. When test numbers change (which is rare), manual checking may be required.
According to Arm developers (BSA ACS issue #395), a PC-BSA ACS will arrive in the first week of December. Once it is released, a new parser will need to be written (like the ones for BSA ACS and SBSA ACS) and the page generator updated to use this information.
This page is one of the projects I plan to abandon in 2025. It was a useful tool for checking what needed to be done for the SBSA Reference Platform, either on virtual hardware (QEMU) or in firmware (TF-A + EDK2).
None of the Arm hardware I use at home is SBSA compliant, and running BSA ACS on some of it causes hangs. I do not expect this situation to change in the coming months.
The current page will remain online but I do not plan to invest time in updating it.
Some time ago I was informed that Red Hat will not renew its membership in the Linaro DataCentre Group, which for me means the end of my second adventure with Linaro.
I was at Linaro from April 2010 to the end of May 2013, and then from April 2016 to the end of the current month. So two stints.
And I tried to leave Linaro twice in the past as well: first in October 2012, when someone decided that it was not yet the time for me to go, and then in May 2013, when I finally left.
Those eight and a half years of Linaro work were a good time.
First we were doing OpenStack. It went from “needs hacks” to being ready for use out of the box. Over six years I wrote hundreds of patches and did countless reviews.
In the meantime cloud providers started offering Arm instances and the pressure to keep OpenStack working became smaller. Why maintain a whole infrastructure when you can rent a virtual one?
Part of that job was extending CirrOS images to behave properly on UEFI systems (AArch64 and x86-64). I defined CI jobs, handled the migration to GitHub and helped with several releases.
There was some work done on distribution images as well.
Then I moved to the SBSA Reference Platform. This was interesting in the beginning. I felt more like a manager than a developer: I had to collect ideas from everyone who worked on it and get something working that was also acceptable upstream.
This ended up as internal versioning of the platform, and then we started adding more features and upgrading the platform whenever something interesting landed in QEMU. Now sbsa-ref uses the Neoverse-N2 cpu by default, can be used in a NUMA setup, has a defined cpu topology and more.
I learnt firmware stuff (TF-A, EDK2), reviewed countless patches etc.
Too bad that during those years I was not able to buy any SBSA compliant hardware below 2000 EUR :(
One of my tasks during my whole time at Linaro was handling Continuous Integration: defining new jobs, taking care of old ones, maintaining the machines we used as Jenkins runners.
This took me to interesting places. There was a lot of Python. I even managed to get involved in the manylinux images used to build Python packages.
I am moving back to Red Hat. There are some open positions where I may fit, so I have to take a look and choose.
About two years ago I got the idea of gathering information about AArch64 SoCs, mostly to have a place showing how many of them still use outdated v8.0 cpu cores.
During those years many things changed. And there were funny moments too.
The current stats of the table are:
| ISA level | Number of SoC entries |
|-----------|-----------------------|
| v8.0      | 89 |
| v8.1      | 1 |
| v8.2      | 58 |
| v8.4      | 1 |
| v8.5      | 6 |
| v8.6      | 8 |
| v9.0      | 24 |
| v9.2      | 8 |
Architecture updates are present on the market. More and more SoC vendors go for newer designs instead of staying in the past. Most of those cases are mobile phones. Cloud systems have also moved to the new designs, as Arm Neoverse-V2 based instances are now available in several places.
As most SoC vendors have switched to Arm designs, I decided to create a table showing some more information about them. And so the AArch64 cpu core information table was created.
It lists all Arm designs (Cortex-A/X, Neoverse) with direct links to their TRM documentation, ID numbers, memory sizes, supported page sizes, SVE vector length and the level of AArch32 support.
The code is open enough to also handle designs from other SoC vendors, but I have not seen such documentation available in public.
Anyone can submit a new entry as an issue on GitHub, and most people have used that route. It is also the recommended way.
There is a column about AArch32 support. Most v8.x SoCs support running 32-bit binaries, and some support booting 32-bit kernels. v9.x ones finally got rid of any kind of 32-bit support.
At some point I added links to Arm core TRMs (Technical Reference Manuals). Then I added information on when each SoC was announced, so it can be seen how new or old it is.
There are sometimes funny moments after several updates. In January 2023 I added the Alibaba Yitian 710 SoC to the table. It was the first Neoverse-N2 system there.
About a month later some random person wrote to me on IRC:
I’m glad you put the Yitian 710 in your table, now I can point people at it, and tell them to look at that for what processor its using.
Another interesting example: Longhorn published information for the NVIDIA Grace SoC when those systems were quite rare.
Or when Jeff Geerling started reviewing an AmpereOne system and published data for the Ampere1A (AC04) cpu core, which was not yet in the table. There was public information that AC04 existed, but no cpuinfo data for it.
When I created this table I did not think that it would get this popular. Now it feels like whenever a device with a new SoC appears on the market, someone sends me data for a new entry.
I have automated most of the work related to maintaining the table, so the project will stay around for as long as people send me data for it.
More and more things move to the cloud.
The times when people used traditional servers have passed.
Who has not heard such sentences in recent years? So it is time for me to move to the cloud as well.
Powered by Arm cloud.
Call me old fashioned, but I like to self-host my services. At least some of them (web, private git repos, mail).
But a few months ago I got an e-mail from OVH:
As part of our project to make our datacentres hyper-resilient, we are starting the transformation of the RBX1 datacentre. Our aim is to modernise our infrastructure, in particular, the building where we house servers hosting all our cloud services. Despite our utmost efforts, this modernisation cannot be carried out without affecting your services.
We are writing to inform you that, as a result, the following Bare Metal servers will be closing down on 31 December 2024
I checked their prices for newer machines and decided that it was a bit too much. So it was time to move somewhere else.
I looked at the options and then remembered that someone had mentioned the ‘Free tier’ on Oracle cloud:
For free. I dusted off my old account, checked that it is a “Pay As You Go” one and started playing with it.
The first try was “let’s use a quarter”: one cpu core, 6 GB of memory and 50 GB of storage. It turned out that this is quite enough to run several websites with small traffic. After some tweaks here and there the machine got some testing and now works.
This blog has been hosted on Oracle cloud for a few weeks now and so far no one has complained.
My previous servers handled my mail in the standard way: Postfix for SMTP, Dovecot for IMAP, with added stuff on top like Amavis, ClamAV, SpamAssassin, OpenDKIM etc. And Roundcube for webmail if someone needed it.
But my system administration skills were never top of the shelf. It worked, I tweaked it from time to time etc. So this time I decided to go with an “all-in-one” solution.
I went with mailcow: a bunch of mail services running as containers, handling things and providing me with a WebUI for configuration.
I added accounts and aliases, and set up IMAP sync jobs so all users had their mail from the previous server present. Then I handled the DNS changes and the server went online.
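mailcow runs imapsync under the hood for its sync jobs. For reference, a roughly equivalent one-shot copy from the command line could look like this; the hosts and password files are placeholders, and the exact flags you want depend on both servers:

```shell
# Hypothetical one-way mailbox copy from the old server to the new one.
# mailcow configures similar jobs for you through its WebUI.
imapsync \
  --host1 old.example.org --user1 user@example.org --passfile1 /root/old.pass \
  --host2 new.example.org --user2 user@example.org --passfile2 /root/new.pass \
  --ssl1 --ssl2 \
  --automap   # try to map common folder names between the servers
```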
In the meantime I checked the other services I run:
My installation of Forgejo remembers the Gitea times, so I took some time and cleaned the configuration to get rid of the gitea names. Now it is running with all the old repositories.
The Factorio multiplayer server went to the trash: there is no Linux/arm64 binary available.
The Discord bot was Python. It migrated fine.
Other shit? Went through it, killed some, migrated some to other places.
How well will it serve my needs? Time will tell. My first server had 4 GB of ram and a dual-core Atom cpu (2 cores, 4 threads), with rotating rust as storage. The last one was an i5-750 (4 cores) with 16 GB of memory and rotating rust.
So the current duo of 2 cores with 8 GB each should work for some time.
Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!
For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.
Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)
Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.
Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!
Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!
It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like a DIMM socket for memory, lots of networking, 3 SATA disk interfaces.
The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer.
I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/
So...
I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-)
A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup!
I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be?
I was shocked! It Just Worked (TM)
I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow!
It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board!
The two things that are missing compared to the Macchiatobin? This is soldered-on memory (but hey, 24G is plenty for me!) It also doesn't have a PCIe slot, but it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs.
Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online.
FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-)
Here's some pictures...
It (was) that time of year again - last weekend we hosted a bunch of nice people at our place in Cambridge for the annual Debian UK OMGWTFBBQ!
Lots of friends, lots of good food and drink. Of course lots of geeky discussions about Debian, networking, random computer languages and... screws? And of course some card games to keep us laughing into each night!
Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!
Warning: If you're not into meat, you might want to skip the rest of this...
This year, I turned 50. Wow. Lots of friends and family turned up to help me celebrate, with a BBQ (of course!). I was very grateful for a lovely set of gifts from those awesome people, and I have a number of driving experiences to book in the next year or so. I'm going to have so much fun driving silly cars on and off road!
However, the most surprising gift was something totally different - a full-day course of hands-on pork butchery. I was utterly bemused - I've never considered doing anything like this at all, and I'd certainly never talked to friends about anything like it either. I was shocked, but in a good way!
So, two weekends back Jo and I went over to Empire Farm in Somerset. We stayed nearby so we could be ready on-site early on Sunday morning, and then we joined three other people doing the course. Jo was there to observe, i.e. to watch and take (lots of!) pictures.
I can genuinely say that this was the most fun surprise gift I've ever received! David Coldman, the master butcher working with us, has been in the industry for many years. He was an excellent teacher, showing us everything we needed to know and being very patient with us when we needed it. It was great to hear his philosophy too - he only uses the very best locally-sourced meat and focuses on quality over quantity. He showed us all the different cuts of pork that a butcher will make, and we were encouraged to take everything home - no waste here!
At the beginning of the day, we each started with half a pig. Over the next several hours, we steadily worked our way through a series of cuts with knife and saw, making the remaining pig smaller and smaller as we went.
We finished the day with three sets of meat. First, a stack of vacuum-packed joints, chops and steaks ready for cooking and eating at home. Second: a box of off-cuts that we minced and made into sausages at the end of the day. Finally: a bag of skin and bones. Our friend's dog got some of the bones, and Jo turned a lot of the skin into crackling that we shared with friends at the OMGWTFBBQ the next weekend.
This was an amazing day. Massive thanks to my good friend Chris Walker for suggesting this gift. As I told David on the day: this was the most fun surprise gift I've ever received. Good hands-on teaching in a new craft is an incredible thing to experience, and I can't recommend this course highly enough.
There have been discussions in development circles about Arm powered laptops since forever. But most of the time they do not mention “normal” users: your parents, spouses, kids who are not developers. People who turn the computer on (cold boot or from suspend, it does not matter) and expect it to “just work”.
My teenage daughter is one of them. Her current laptop is one of the Thinkpad models, and the previous one was a Thinkpad as well. Fedora Linux as the operating system serves her needs just fine. But despite my 20 years of work with the Arm architecture, I am unable to get an Arm based laptop for her.
There are several Arm powered laptops on the market:
And all of them have issues when it comes to using Fedora Linux on them.
Thanks to the Asahi Linux team we can run Fedora Linux on M1/M2 based Macbooks. Which means the second-hand market, as Apple does not sell those models any more (unless 8GB of ram is enough for you).
There are many things which are not supported:
So you pay for hardware and get features you cannot use. I use a Macbook Pro 2021 (with the M1 Pro cpu) for local development and have stopped checking how the work is going.
Qualcomm managed to convince Microsoft not to offer licenses to other vendors, which means all we can have are Snapdragon based laptops. They may work nicely under MS Windows, but if you want to use Linux then “good luck” is all you can get from me.
Some things work, some do not. I was told that the Thinkpad x13s is one of the best supported models. Johan Hovold has a Thinkpad x13s status page which lists what works and what needs to be done to have some kind of working laptop.
Definitely not a system for daily use by a normal Linux user.
A laptop to run a web browser and Android apps. If that is all you need then go for it. But avoid it if you are a “normal” user and want to run Linux.
Finding out how to enable running anything other than ChromeOS may involve digging through Internet pages, finding out how to override ‘write protection’ etc.
Just no. Also, the Arm ones are usually ram limited.
Those are systems for developers only. Normal users should avoid them, as those systems require someone who knows how to get them working at all.
You have to find or build proper firmware and put it in the right place on the device (SPI flash or storage media), and keeping things up-to-date may end with a partially non-working device etc.
For developers those are ‘issues’ to work around or solve, but for normal users it may be ‘an update ran in the background and now all I have is a black screen’.
And as with Chromebooks, you may be limited by ram size (the Pinebook Pro has only 4GB of ram).
If you are a normal user who wants to run Linux on a laptop, then maybe stay away from Arm powered ones. Leave them to developers and check once or twice per year to see how the situation looks.
At work I spend most of my time on the SBSA Reference Platform, especially the firmware part (Arm Trusted Firmware, also known as TF-A, and Tianocore EDK2, also known as UEFI). However, for some time I have felt the need to experiment with UEFI-related tasks on existing hardware.
I first searched for an “affordable” SystemReady SR system. But the options were either Ampere Altra or NVIDIA Grace, both priced at 3000 EUR or more.
So I looked at the budget market and bought a FriendlyELEC NanoPC-T6 SBC.
The FriendlyELEC NanoPC-T6 is an SBC (Seriously Bad Computer) based on the Rockchip RK3588 SoC. It has some interesting on-board features:
It comes with a metal case which also works as a heatsink.
As you know, I expect a good “out of the box” experience. And the NanoPC-T6 was like any other SBC I have used in the past: unpleasant, horrible and frustrating.
The device came with a fork of U-Boot 2017.09, configured in such a terrible way that it was incapable of booting any of the standard distro images I tried. I managed to boot the pre-installed Android 12 from the eMMC but quickly rebooted to avoid dealing with it.
I managed to boot Debian ‘testing’ manually, but there was no networking available under the 6.9.x kernel.
I then moved on to other things as my schedule was quite busy.
This week I reserved some time to get the NanoPC-T6 running properly. I downloaded a Rockchip tool called “upgrade_tool” and used it to flash a UEFI image from the EDK2-RK3588 project.
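For the record, flashing went roughly like this with the board in maskrom mode. The file names below are made up and the exact invocation is from my memory of Rockchip’s tooling, so treat it as an assumption and follow the EDK2-RK3588 release notes for the authoritative steps:

```shell
# Board connected over USB-C in maskrom mode (assumed; check the project docs).
upgrade_tool db rk3588_spl_loader.bin   # "db": download and run the loader
upgrade_tool wl 0 nanopc-t6_UEFI.img    # "wl": write the firmware image at LBA 0
upgrade_tool rd                         # "rd": reset the device
```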
The experience was much, much better. The firmware was now capable of booting distro images, allowed me to choose between ACPI and DeviceTree for the hardware description, and had a proper EFI Shell — almost like a well-developed system.
I went with ACPI mode and booted directly into a Fedora ‘rawhide’ system stored on a USB drive. Linux 6.11-rc booted, found devices plugged into the USB 3 ports, and recognized both network interfaces (Realtek 8125 ones) as well as the NVME drive. There was video output on the HDMI screen (at a hardcoded 1080p resolution).
I then copied the system from the USB3 drive to the NVME, set the proper boot order and enjoyed a nicely working system.
But aren’t Seriously Bad Computers (SBCs) expected to run with DeviceTree? ACPI is for MS Windows, not for Linux or *BSD systems, right?
So I decided to boot into DT land. It took me a while, as I had to remind myself how it all works and make sure that the UEFI firmware would use the 6.11-rc DTB instead of the one for the 5.10-rk or 6.1-rk vendor kernels.
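One way to point the boot chain at a specific DTB is GRUB’s devicetree command. A sketch of a custom menu entry, with illustrative paths and kernel version (the DTB name matches the upstream rk3588-nanopc-t6 device tree):

```shell
# Illustrative /etc/grub.d/40_custom fragment; adjust paths, labels and versions.
menuentry 'Fedora rawhide (custom DTB)' {
    search --no-floppy --set=root --label fedora
    # Load the kernel's own DTB instead of whatever the firmware provides
    devicetree /boot/dtb-6.11.0-rc/rockchip/rk3588-nanopc-t6.dtb
    linux /boot/vmlinuz-6.11.0-rc root=LABEL=fedora ro
    initrd /boot/initramfs-6.11.0-rc.img
}
```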
Finally it booted — or rather, it “kind of” booted…
No USB, no PCIe == no NVME == boot into emergency mode because the root filesystem is not present…
What will the future bring? I am going to find out. I have ordered a Wi-Fi card for the m.2 type E slot to see how it performs, and I am going to spend some time around this EDK2 fork to run some experiments on real hardware.
Nov 26th saw the release of 4.4.165, 4.9.141, 4.14.84 and 4.19.4.
For these LTS kernel versions, results were reported upstream, no regressions were found.
2018-11-26: Rafael Tinoco – bug 4043 – Asked Greg to backport a fix for v4.4, Sasha forwarded to the mm list.
For Android Kernels, regressions were detected.
Issues:
No other regressions: 4.4.165 and 4.9.141 on Android 9.
X15: 4.14.84 + O-MR1 – Baselining activity has been particularly effective over the past two weeks, dropping the number of errors from 65 failing tests to 16 as of today. That’s really good progress towards setting a clean baseline.
Bug 4033 Sumit has been looking at the failing CtsBluetoothTestCases android.bluetooth.cts.BluetoothLeScanTest#testBasicBleScan and android.bluetooth.cts.BluetoothLeScanTest.testScanFilter failures.
These tests both pass across all kernels with 8.1. They fail, however, with both 9.0 and AOSP. Looking at historical AOSP results, it appears that the failures there started approximately in the September timeframe.
Lastly, there were successful test builds and test boots to UI with 4.4.165 and 4.9.141 (with Android 9), using the newly released clang-r346389 compiler.