Raspberry Pi Zero W


 
I'm thinking the easiest way to do this would be to dump the 1MB that includes the end of the image. The image file should end with 128KB of zeros.
Code:
dd if=(image file or /dev/sdX of sd card) bs=1M skip=28 count=1 | hexdump -C > image.hex

The standard openwrt-rpi image should end like this:
Code:
0002c1b0  8c 28 35 a8 38 00 00 00  96 4e 38 88 00 01 b2 02  |.(5.8....N8.....|
0002c1c0  f0 07 00 00 e1 35 be 6d  3e 30 0d 8b 02 00 00 00  |.....5.m>0......|
0002c1d0  00 01 59 5a 6c b9 42 00  00 00 00 00 7a c0 42 00  |..YZl.B.....z.B.|
0002c1e0  00 00 00 00 04 80 00 00  00 00 e4 c1 42 00 00 00  |............B...|
0002c1f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
0004c1f2

311794 bytes (312 kB, 304 KiB) copied, 0.145617 s, 2.1 MB/s

The addresses should be exactly the same, so if the * starts before 0x0002c1f0, a shortened image is ending up on the card, which would mean a corrupted squashfs. One could also hexdump the entire first 29MB (no skip, count=29) and diff the two dumps to see where the differences start. If they start after 0x1c00000 + 0x0004c1f2 (28MB + the offset above), that would indicate a complete, good image copy.
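The dump-and-diff check above can be wrapped in a small helper. This is just a sketch: the `compare_tail` name is my invention, and the `skip=28` offset assumes the ~29MB image discussed here; pass the image file and the card device (e.g. /dev/sdX) as the two arguments.

```shell
# Hypothetical helper: dump the megabyte at the 28MB mark from two sources
# and diff the hexdumps. Writes a.hex/b.hex into the current directory.
compare_tail() {
    dd if="$1" bs=1M skip=28 count=1 2>/dev/null | hexdump -C > a.hex
    dd if="$2" bs=1M skip=28 count=1 2>/dev/null | hexdump -C > b.hex
    if diff a.hex b.hex >/dev/null; then
        echo "tail matches"
    else
        echo "tail differs"
    fi
}
```

Usage would be something like `compare_tail lede-brcm2708-bcm2708-rpi-squashfs-sdcard.img /dev/sdX` after writing the card.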

What I might do is calculate how many 64KB blocks the squashfs is, add 1 to make sure the start of the overlayfs is blanked out, then round up to the next 1MB and make sure all that extra space is filled with zeros in the image file. That might just fix any issues with all microSD image writers?
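That rounding arithmetic can be sketched in shell. The sizes below are examples for illustration (the 29671922-byte figure from later in the thread), not the actual build values:

```shell
# Sketch of the padding math described above: count 64KB squashfs blocks,
# add one so the overlayfs start gets blanked, then round up to 1MB.
SQUASH=29671922                             # example squashfs size in bytes
BLK=65536                                   # 64KB block size
MB=1048576
BLOCKS=$(( (SQUASH + BLK - 1) / BLK ))      # 64KB blocks, rounded up
PADDED=$(( (BLOCKS + 1) * BLK ))            # +1 block to blank the overlayfs start
ALIGNED=$(( (PADDED + MB - 1) / MB * MB ))  # round up to the next 1MB
echo "$ALIGNED"                             # total bytes, the extra zero-filled
```

For this example, 29671922 bytes becomes 453 blocks, 454 after the extra block, and the 1MB round-up lands on 30408704 bytes.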
 
Using bs=4m without conv=sync, I get

Code:
$ sudo dd bs=4m if=Downloads/lede-brcm2708-bcm2708-rpi-squashfs-sdcard.img of=/dev/rdisk2
dd: /dev/rdisk2: Invalid argument
7+1 records in
7+0 records out
29360128 bytes transferred in 3.394625 secs (8649004 bytes/sec)

However, I just noticed something (and I'm not sure why I didn't before): I'm getting an "Invalid argument" error from dd.

I don't get this error when using the 79MB file:

Code:
$ sudo dd bs=4m if=Downloads/openwrt-brcm2708-bcm2710-rpi-3-ext4-sdcard.img of=/dev/rdisk2
19+0 records in
19+0 records out
79691776 bytes transferred in 9.810490 secs (8123119 bytes/sec)

And to add another curveball, if I switch from /dev/rdisk2 to /dev/disk2, the dd process is much slower, but the image boots.

Code:
$ sudo dd bs=4m if=Downloads/lede-brcm2708-bcm2708-rpi-squashfs-sdcard.img of=/dev/disk2
7+1 records in
7+1 records out
29671922 bytes transferred in 19.889310 secs (1491853 bytes/sec)

if I use conv=sync I get:

Code:
$ sudo dd bs=4m if=Downloads/lede-brcm2708-bcm2708-rpi-squashfs-sdcard.img of=/dev/rdisk2 conv=sync
7+1 records in
8+0 records out
33554432 bytes transferred in 3.885881 secs (8634961 bytes/sec)

Notice that 33554432 is a multiple of 4096!
 
AHA! Well now this is all starting to make sense. See that +1 on the "records in" line? That's because dd doesn't have a full bs-sized block for the last transfer. However, /dev/rdisk requires that writes come in multiples of the device block size, so dd can't write the last chunk, and you get 7+0 out instead of the 7+1 needed to write the image completely. With /dev/disk, the kernel breaks the writes up into 4KB chunks before they go out to the device, and those work, but of course they're slow because each 4KB chunk gets turned into a write the size of the device's erase block (generally 4MB), so it takes a lot longer.

The HeaterMeter image is 29671922 bytes, which isn't even a multiple of 1KB, much less any larger number. The old image (79691776 bytes) is a multiple of 4MB, which is of course also a multiple of 4KB! So I think I just need to make the image a multiple of 4KB so it can be written by dd to an rdisk device without requiring conv=sync?
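If that's the fix, padding an image file out to the next 4KB boundary is a one-liner. A hedged sketch (the `pad_to_4k` name is mine; `stat -c %s` and `truncate` are GNU coreutils, so on a Mac you'd use `stat -f %z` and another padding method):

```shell
# Hypothetical helper: zero-pad a file to the next multiple of 4096 bytes,
# so the whole image can be written to /dev/rdiskN in full device blocks.
pad_to_4k() {
    size=$(stat -c %s "$1")                           # current size in bytes
    truncate -s $(( (size + 4095) / 4096 * 4096 )) "$1"   # truncate extends with zeros
}
```

A file that is already 4KB-aligned is left untouched, since the rounding is a no-op.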

You are really a hero on this, Steve. Your thorough testing and experimentation appears to be getting to the bottom of this and possibly affecting even non-mac users depending on how they are writing their sd cards. I really appreciate it. I'm going to make a few images that have some different block sizes today and maybe we can see what the cutoff point is (is 4K enough or does it need to be 1M or 4M?).
 
Well we know we need 64KB alignment for the overlayfs wipe, so let's just try an image with 64KB alignment and see if that works without needing conv=sync?
https://heatermeter.com/devel/snapshots/bcm2708/
Code:
de4d9871bd0873f288f0807a417cf3c0  lede-brcm2708-bcm2708-rpi-squashfs-sdcard-64k.img

I spent all morning trying to figure out the right Makefile syntax to expand and align the squashfs image by just the right amount, but I think I've got it figured out now, and I can make it align to any size easily right in the Makefile. If 64KB doesn't work, I'll try a few larger numbers. Hopefully this will work so I can clean up the 300 Raspberry Pis and SD cards I have all over my desk.
 
Successful write and boot!

Code:
$ sudo dd bs=4m if=Downloads/lede-brcm2708-bcm2708-rpi-squashfs-sdcard-64k.img of=/dev/rdisk2
7+1 records in
7+1 records out
29622272 bytes transferred in 3.396832 secs (8720558 bytes/sec)
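That byte count checks out: 29622272 is exactly 452 × 64KB, so the raw-device write no longer has a partial trailing block. A quick sanity check anyone could run on an image before writing it (the size here is hardcoded from the dd output above):

```shell
# Check the alignments that mattered in this thread: 4KB (device block
# writes to /dev/rdiskN) and 64KB (the overlayfs wipe boundary).
SIZE=29622272              # byte count from the successful dd run above
for ALIGN in 4096 65536; do
    if [ $(( SIZE % ALIGN )) -eq 0 ]; then
        echo "$ALIGN: aligned"
    else
        echo "$ALIGN: off by $(( SIZE % ALIGN ))"
    fi
done
```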
 
We did it! I mean we hopefully did it. Pushing this out as the standard snapshot now and let's see how it goes. Can't be any worse than the old snapshot, right?
 
Good stuff. Here's to hoping that it's solved once and for all.

Nice to hear of a dd story where the outcome was positive :)

Several years ago I was helping one of our DBAs at work to troubleshoot a disk performance issue. I gave him a dd command to run. He mixed up the if= and of= values and ended up nuking a 750GB Oracle data set. He didn't know what went wrong, but instead of stopping and asking, he went and did the same thing to 2 other servers as well. Fun times!
 
Haha, now that is an epic blunder. "Well, this computer isn't working any more, let me try it on another. Hrm, same thing. Let me do it to one more server!"

New snapshot should be up now.
 
Good stuff. Here's to hoping that it's solved once and for all.

Nice to hear of a dd story where the outcome was positive :)

Several years ago I was helping one of our DBAs at work to troubleshoot a disk performance issue. I gave him a dd command to run. He mixed up the if= and of= values and ended up nuking a 750GB Oracle data set. He didn't know what went wrong, but instead of stopping and asking, he went and did the same thing to 2 other servers as well. Fun times!

dd to a cooked database file? Let me guess, reversed /dev/null and the db filename. Nope. NEVER saw that happen before...

Backups and archive/redo logs.
 
Ok, I am a little less happy with the changes I've just made, but I can still effectively pad and align the rootfs to any size in the build process without relying on the image generator. I've pushed up a new snapshot which should now be 64KB aligned for both the pre-configured images and the default image.
 
Are there any other reports of the preconfigured firmware or regular firmware files getting stuck in a boot loop now with the latest version, or can I consider this issue resolved?
 
I downloaded the /dl file for the Zero today and flashed 2 Pis: "zero" problems
Haaa, I get it! :)

Well, that is great news. Thanks to everyone who tested and provided the clues needed to figure out what I had to do to make things work, especially Steve with the dd commands that worked and the ones that didn't. I just gotta make sure all the code is checked in, and I think we've got ourselves a release? I'll go get the beer!
 
I just downloaded the Client version of the latest snapshot; it worked perfectly on the same unit that had a problem with the last snapshot. I think you've got it sussed... Great work as always!
 

 
