
arm64: Add per SoC family kernel config
Closed, Public

Authored by manu on May 26 2021, 4:27 PM.

Details

Summary

There are multiple reasons for this:

  • It makes it easier to see which drivers are needed for each SoC
  • It makes it easier to create a custom config for a single SoC
  • It really reduces boot time (which some people might want)

Some explanation of the files:

  • std.arm64 contains all the standard kernel options
  • std.dev contains all the standard kernel devices
  • std.<soc> contains all the drivers needed to boot on that SoC family
  • <SOC> includes std.arm64, std.dev and std.<soc>
  • GENERIC includes std.arm64, std.dev and all std.<soc>
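As a sketch of how these pieces compose (the file names come from the list above, but the exact contents are illustrative, not the real committed files), a per-SoC config in config(8) syntax might look like:

```
#
# ROCKCHIP -- sketch of a per-SoC kernel config (illustrative only)
#

include "std.arm64"     # standard kernel options
include "std.dev"       # standard kernel devices
include "std.rockchip"  # drivers needed to boot on Rockchip SoCs

ident   ROCKCHIP
```

GENERIC would instead include std.arm64, std.dev, and every std.<soc> file.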

Sponsored by: Diablotin Systems
MFC After: 2 months

Test Plan

Tested on:

  • sopine: 5.9 s (ALLWINNER) vs 9.9 s (GENERIC)
  • rockpro64: 11.9 s (ROCKCHIP) vs 27.0 s (GENERIC)
  • rpi4: 5.3 s (BROADCOM) vs 11.1 s (GENERIC)

Times are measured from the <BOOT> line to the NFS ROOT line.


Event Timeline

manu requested review of this revision. May 26 2021, 4:27 PM
manu edited the test plan for this revision. (Show Details)

is AL a common identifier?

In this model, if someone adds a new device to amd64 GENERIC, what process should they follow for updating arm64?

sys/arm64/conf/AL:8

old URL

sys/arm64/conf/AMD:3

AMD what?

sys/arm64/conf/CAVIUM:3

Is this ThunderX?

is AL a common identifier?

No idea; the real name is Annapurna Labs.

In this model, if someone adds a new device to amd64 GENERIC, what process should they follow for updating arm64?

Just add it to the std.<soc> files where it makes sense.
I split it this way mostly because of config(8), but maybe I can add a std.dev with all the common devices and include that in GENERIC and in the <SOC> kernconfs; I'll try.

There seems to be a lot of duplication for drivers with a standard bus attach like PCI or USB

sys/arm64/conf/std.broadcom:74

There is a lot of duplication for NIC drivers

sys/arm64/conf/std.broadcom:105

Ditto USB

maybe I can do a std.dev with all the common device and include that in GENERIC

I wonder if we could go further and share some #include between arm64 and amd64?

maybe I can do a std.dev with all the common device and include that in GENERIC

I wonder if we could go further and share some #include between arm64 and amd64?

I don't think we should. old config makes this harder to make work than just keeping two files in sync. I also don't think we should add a device to GENERIC unless someone has tested it on that architecture or the driver is simple.

Add std.dev
Fix links to use the new URLs

manu marked 2 inline comments as done.

This will make it harder to keep amd64 and arm64 GENERIC in sync; if we're going to do this we should do similar (e.g. introducing std.dev) on amd64

This will make it harder to keep amd64 and arm64 GENERIC in sync; if we're going to do this we should do similar (e.g. introducing std.dev) on amd64

Why should amd64 and arm64 be in sync?
arm64 is much more than one big, somewhat-compatible hardware platform.

arm64 is much more than one big, somewhat-compatible hardware platform.

For GENERIC, though, it ought to be: Ampere eMAG and Altra are basically equivalent to a high-end x86 server with respect to add-in cards that may be installed, etc.

arm64 is much more than one big, somewhat-compatible hardware platform.

For GENERIC, though, it ought to be: Ampere eMAG and Altra are basically equivalent to a high-end x86 server with respect to add-in cards that may be installed, etc.

And this is why GENERIC here isn't changed.
I'm not proposing that we stop having a GENERIC kernel by default in our images, just that we make it easier for embedded users to write or use a custom kernel config.
We could even have a GENERIC-ACPI to use on big/medium iron and EC2.

manu added a subscriber: mikael.

Fix Nvidia NICs according to what @mikael sent me.

Add fan53555 to std.rockchip

Given the limitations in config(8), I think this is likely as good as it gets.
I have a minor reservation on the name std.dev, but it doesn't matter since amd64 doesn't use it, and even were amd64 to adopt it, it could easily be sorted out then.

This revision is now accepted and ready to land. Jul 14 2021, 7:21 PM
sys/arm64/conf/AMD:19

One last-minute concern: all the new config files will be built by UNIVERSE since they aren't marked to opt out. Is that what you really want?

Add NO_UNIVERSE for all SoC-specific configs.
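For reference, a hedged sketch of what the opt-out looks like (from memory, make universe skips any kernel config containing a #NO_UNIVERSE comment, so each SoC config carries that marker near the top; the exact spelling and mechanism may differ):

```
# Sketch of a per-SoC config opted out of universe builds.
# The marker below is grepped for by the top-level build, if memory serves.
#NO_UNIVERSE

include "std.arm64"
include "std.dev"
include "std.allwinner"

ident   ALLWINNER
```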

This revision now requires review to proceed. Jul 15 2021, 2:34 PM
In D30474#701728, @imp wrote:

Given the limitations in config(8), I think this is likely as good as it gets.
I have a minor reservation on the name std.dev, but it doesn't matter since amd64 doesn't use it, and even were amd64 to adopt it, it could easily be sorted out then.

That's easily changeable now; what name do you suggest? std.common maybe?

In D30474#702001, @manu wrote:
In D30474#701728, @imp wrote:

Given the limitations in config(8), I think this is likely as good as it gets.
I have a minor reservation on the name std.dev, but it doesn't matter since amd64 doesn't use it, and even were amd64 to adopt it, it could easily be sorted out then.

That's easily changeable now; what name do you suggest? std.common maybe?

Thinking about it, let's not change it. I don't have any great, better name. config++ will change a lot of things, so I think we should defer further tinkering until then.

This revision is now accepted and ready to land. Jul 15 2021, 3:07 PM
This revision was automatically updated to reflect the committed changes.

I see problems with two of the files here:

  • std.dev: These are commonly seen devices, but several of them are neither universal nor required. In fact I wonder whether nvme support is common. Looking at the file, this likely should have been broken into several pieces.
  • std.virt: In its present form, this isn't a virtual SoC configuration but a VMware SoC configuration. This should have been "std.vmware".

I'm trying to figure out the interaction with D30950. For a Xen DomU, the list of drivers is very short. I'm wondering whether VIRT should include "std.dev" at all.

I see problems with two of the files here:

  • std.dev: These are commonly seen devices, but several of them are neither universal nor required. In fact I wonder whether nvme support is common. Looking at the file, this likely should have been broken into several pieces.

Maybe nvme isn't that common in the SBC world, but splitting this file into two or more files would have been too confusing IMHO.

  • std.virt: In its present form, this isn't a virtual SoC configuration but a VMware SoC configuration. This should have been "std.vmware".

The only VMware stuff is pvscsi and vmx; all the virtio devices are also there, and they are used in qemu and kvm machines.

I'm trying to figure out how this interacts with D30950. For a Xen DomU, the list of drivers is very short. I'm wondering whether VIRT should include "std.dev" at all.

Of course it should include it, as std.dev has pci, i2c, ahci, etc., all things used in a VM.

In D30474#702742, @manu wrote:

I'm trying to figure out how this interacts with D30950. For a Xen DomU, the list of drivers is very short. I'm wondering whether VIRT should include "std.dev" at all.

Of course it should include it, as std.dev has pci, i2c, ahci, etc., all things used in a VM.

I've got plans for a Xen DomU which only needs the Xen block device, Xen network and Xen console. No other UARTs, storage devices or network will be present. I could see other VM systems emulating rather more or different hardware, but the included list appears rather generous. Do other hypervisors emulate GPIOs or I2C to guests?

In D30474#702742, @manu wrote:

I'm trying to figure out how this interacts with D30950. For a Xen DomU, the list of drivers is very short. I'm wondering whether VIRT should include "std.dev" at all.

Of course it should include it, as std.dev has pci, i2c, ahci, etc., all things used in a VM.

I've got plans for a Xen DomU which only needs the Xen block device, Xen network and Xen console. No other UARTs, storage devices or network will be present. I could see other VM systems emulating rather more or different hardware, but the included list appears rather generous. Do other hypervisors emulate GPIOs or I2C to guests?

I guess they can, yes.
If/when xen/arm64 becomes a thing we could think about adding a XEN kernel configuration file.
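A hypothetical minimal Xen DomU config along those lines might skip std.dev entirely (purely illustrative: xen/arm64 support did not exist at the time of this review, and the option and device names below are borrowed from the x86 Xen support, so they are assumptions rather than a real arm64 kernconf):

```
#
# XEN -- hypothetical minimal arm64 DomU sketch (not a real config)
#

include "std.arm64"     # standard kernel options only; deliberately no std.dev

ident   XEN

options XENHVM          # Xen guest support (name taken from amd64 GENERIC)
device  xenpci          # Xen paravirtualized device support (likewise from x86)
```

Such a config would rely on the Xen block, network, and console devices alone, matching the short driver list described above.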

In D30474#702742, @manu wrote:

I see problems with two of the files here:

  • std.dev: These are commonly seen devices, but several of them are neither universal nor required. In fact I wonder whether nvme support is common. Looking at the file, this likely should have been broken into several pieces.

Maybe nvme isn't that common in the SBC world, but splitting this file into two or more files would have been too confusing IMHO.

Drat, I should have responded to this bit. This seems to argue nvme should be included in a "std.dev_server" file meant for that class of device, and then have a "std.dev_embedded" for the lower end.

I'm unsure what I would do about VM systems, since those often have very few devices but a few very high-end features (VM systems are much more likely to include hotplug hardware, since they simply simulate it). As noted, I'm unsure how often gpio, i2c or other interesting devices are likely to be passed to VMs. Most VMs will have some flavor of uart, some storage and a nic; anything not on that list might not be appropriate for "std.dev", or else "std.dev" shouldn't be included by VIRT.

After a look, less than one-third of the device configs didn't include a distinct ethernet driver, so "em" and "ix" don't seem very common. If you include the ones which reference "iflib", only just over a third need that.

There will always be a balance between efficiency and convenience. These configs try to strike a reasonable one, but may waste a small amount of memory on devices that aren't present. I'm inclined to keep things easy to maintain over having a better match to the exact hardware. People that need to shave the extra bytes can still have a custom kernel that starts with one of these configs and then removes devices....

In D30474#702763, @imp wrote:

There will always be a balance between efficiency and convenience. These configs try to strike a reasonable one, but may waste a small amount of memory on devices that aren't present. I'm inclined to keep things easy to maintain over having a better match to the exact hardware. People that need to shave the extra bytes can still have a custom kernel that starts with one of these configs and then removes devices....

Exactly.
If the point is "this doesn't fit my needs 100%", I agree, and there isn't much we can do.
Also note that drivers that attach to the pci bus aren't probed if you don't have a pci driver, and even if they take a bit of space in the resulting kernel file, it's not much.