I have an old but still working Fujitsu Dynamo 1300 FE magneto-optical drive (aka MDF3130EE). It was purchased brand new and has never worked correctly with my Ubuntu 18.04, although there were no problems with the drive under Windows 7. The primary objective, however, was to use the drive with Ubuntu. The problem was that the drive always hung in the middle of copying from an HDD to a magneto-optical disk; the partitioning and file system type of the magneto-optical disk didn't matter. Workaround N1 turned out to be the Linux I/O scheduler: only the BFQ scheduler prevented the drive from hanging. Here is how:
a. Let’s create a device rule file
root@hostname:/etc/udev/rules.d# more 41-modrive.rules
SUBSYSTEMS=="scsi", DRIVERS=="sd", ATTRS{model}=="MDF3130EE-4500 ", OWNER="me", GROUP="mygroup", MODE="0640", RUN+="/root/changeIOShed2MO.sh"
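The model string must match the kernel attribute verbatim, including any trailing padding spaces. On a real system it can be read with `udevadm info -a -n /dev/sdb | grep 'ATTRS{model}'` (the device node is an example). A minimal sketch of pulling the value out of such a line, using a hard-coded sample fragment as an assumption:

```shell
# Sample udevadm attribute line standing in for real output (an assumption):
sample='    ATTRS{model}=="MDF3130EE-4500 "'
# Extract everything between the double quotes, padding spaces included.
model=$(printf '%s\n' "$sample" | sed -n 's/.*=="\(.*\)".*/\1/p')
printf '[%s]\n' "$model"   # brackets make any trailing space visible
```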
b. A script to change the I/O scheduler
root@hostname:~# vi changeIOShed2MO.sh
#!/bin/bash
# Find which block device the MO drive was assigned to and switch it to BFQ.
for dev in sdb sdc sdd
do
    if udevadm info -a -n "/dev/$dev" 2>/dev/null | grep -q 'MDF3130EE-4500'
    then
        echo "bfq" | tee "/sys/block/$dev/queue/scheduler"
        exit 0
    fi
done
c. Activate it all together
root@hostname:~# udevadm trigger
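To confirm the rule actually fired, the scheduler file can be inspected; the active scheduler is the bracketed entry, e.g. `cat /sys/block/sdb/queue/scheduler` might print `mq-deadline kyber [bfq] none` (the device name and the exact scheduler list are assumptions). A small helper to extract the bracketed entry, exercised here on a sample file rather than real sysfs:

```shell
# Print the active (bracketed) scheduler from a scheduler file.
active_sched() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p' "$1"
}

# Sample standing in for /sys/block/sdb/queue/scheduler (an assumption):
printf 'mq-deadline kyber [bfq] none\n' > /tmp/sched.sample
active_sched /tmp/sched.sample   # prints: bfq
```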
Was the hanging fixed? Yes, but not forever. The next round came with a regular Ubuntu 22.04 update. No idea what was broken or improved this time, but the drive started to hang again. All available I/O schedulers were tried without success, so workaround N2 was found and applied: sbp2.use_blk_mq. That is a Linux kernel module parameter for the sbp2 driver that controls whether to use the modern multi-queue block layer (blk-mq). The sbp2 driver handles storage devices connected via the IEEE 1394 (FireWire) interface. The multi-queue block layer (blk-mq) is a modern I/O scheduling framework designed to improve the performance of fast storage devices (not my drive's case, actually), especially with multi-core CPUs. It does this by:
- Allowing multiple I/O queues to be processed in parallel across different CPU cores.
- Reducing the overhead associated with I/O scheduling, which was a bottleneck in the older single-queue block layer.
By setting sbp2.use_blk_mq to 0, you instruct the SBP-2 driver not to use the multi-queue block layer. So:
root@hostname:~# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer sbp2.use_blk_mq=0"
...
:wq
root@hostname:~# update-grub
root@hostname:~# sync
root@hostname:~# reboot
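After the reboot, it is worth checking that the parameter actually made it onto the kernel command line, e.g. with `grep -o 'sbp2.use_blk_mq=[01]' /proc/cmdline`. A sketch of the same check against a sample command line (the sample string is an assumption standing in for real /proc/cmdline contents):

```shell
# Sample /proc/cmdline contents (an assumption):
cmdline='BOOT_IMAGE=/vmlinuz ro quiet splash pci=noaer sbp2.use_blk_mq=0'
# Pull out just the sbp2 parameter; an empty result would mean
# update-grub did not pick the change up.
printf '%s\n' "$cmdline" | grep -o 'sbp2\.use_blk_mq=[01]'   # prints: sbp2.use_blk_mq=0
```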
Great! The hanging has been fixed again, but certainly not for the long term.


