
Comments (157)

007revad avatar 007revad commented on September 22, 2024 2

My E10M20-T1 arrived 30 minutes ago and I now have both 10GbE and NVMe drives working in DSM 7.2-64570 Update 3 :o)

The solution was simple once I realized LAN 5 was missing as well as the NVMe drives.

  1. sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nvme DS1821+ yes
  2. sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nic DS1821+ yes
  3. Reboot.

If I'd thought of checking that "/usr/syno/etc/adapter_cards.conf" matched "/usr/syno/etc.defaults/adapter_cards.conf", and that they both contained DS1821+ in the correct places, I could have saved $300.
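For anyone wanting to script that sanity check: Synology's get_section_key_value / set_section_key_value helpers just read and write an INI-style file, so the lookup half can be sketched with standard awk. The function name and sample data below are hypothetical, not Synology tooling:

```shell
# Sketch: extract a key's value from one [section] of an INI-style file,
# roughly what Synology's get_section_key_value does. The real file is
# /usr/syno/etc/adapter_cards.conf; everything here is illustrative.
get_ini_value() {   # usage: get_ini_value SECTION KEY FILE
    awk -v section="$1" -v key="$2" '
        /^\[/ { in_sec = ($0 == "[" section "]") }      # track current section
        in_sec && index($0, key "=") == 1 {             # key at start of line
            print substr($0, length(key) + 2); exit
        }
    ' "$3"
}

# Example check before buying hardware: both files should agree.
#   get_ini_value E10M20-T1_sup_nvme DS1821+ /usr/syno/etc/adapter_cards.conf
#   get_ini_value E10M20-T1_sup_nvme DS1821+ /usr/syno/etc.defaults/adapter_cards.conf
```

Using index() instead of a regex matters here because the key "DS1821+" contains a regex metacharacter.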

[Image: E10M20-T1 working]

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024 1

I need to get some information from you. Can you reply with what the following commands return:

synodisk --enum -t cache

cat /sys/block/nvme0n1/device/syno_block_info

cat /sys/block/nvme1n1/device/syno_block_info

cat /sys/block/nvme2n1/device/syno_block_info

cat /sys/block/nvme3n1/device/syno_block_info


007revad avatar 007revad commented on September 22, 2024 1

And 2 more:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done

Assuming the last line of that command's output ended in 0000:07:00.0, run this command:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done


007revad avatar 007revad commented on September 22, 2024 1

I have enough to create a model.dtb file for your DS1821+ to make the E10M20-T1 and its NVMe drives appear in Storage Manager.

But the result of the last command is a little confusing, though it doesn't matter for what we're doing.

  1. 0000:08:00.0
  2. 0000:08:02.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  3. 0000:08:03.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  4. 0000:08:04.0 is the M.2 slot 2 in the E10M20-T1 for an NVMe drive.
  5. 0000:08:08.0 is the M.2 slot 1 in the E10M20-T1 for an NVMe drive.
  6. 0000:08:0c.0

I don't know what 0000:08:00.0 and 0000:08:0c.0 are for. One of them could be for the 10G in the E10M20-T1.


RozzNL avatar RozzNL commented on September 22, 2024 1

Great!
I don't mind testing some more for you, if you need the info for the future.


007revad avatar 007revad commented on September 22, 2024 1

I don't mind testing some more for you, if you need the info for the future.

I will take you up on that.


007revad avatar 007revad commented on September 22, 2024 1

That's disappointing and unexpected.

It's 9pm here and it's been a busy day. I'll get back to you tomorrow.


007revad avatar 007revad commented on September 22, 2024 1

@RozzNL @zcpnate @cfsnate

I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 update 1... which is not what we want.

The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"

After editing syno_hdd_db.sh, redo the steps in this comment.
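That manual edit can also be scripted. A hedged sketch (the function name is hypothetical; matching on the text rather than hard-coding line 1335 survives script versions where the line number shifts):

```shell
# Sketch: comment out every 'check_modeldtb "$c"' call in a copy of
# syno_hdd_db.sh. Keeps a .bak of the original file. Matching the text
# instead of line 1335 survives updates that shift line numbers.
comment_out_check_modeldtb() {   # usage: comment_out_check_modeldtb FILE
    sed -i.bak 's/^\([[:space:]]*\)check_modeldtb "$c"/\1#check_modeldtb "$c"/' "$1"
}
```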


RozzNL avatar RozzNL commented on September 22, 2024 1

Will try that later today Dave


007revad avatar 007revad commented on September 22, 2024 1

Yes, DSM does overwrite /run/model.dtb during boot.

Can you restore the backed-up model.dtb file until I create a new model.dtb for you to try?

  1. cp -p /etc.defaults/model.dtb.bak /etc.defaults/model.dtb
  2. cp -pu /etc.defaults/model.dtb.bak /etc/model.dtb
  3. Reboot
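A sketch of steps 1 and 2 as a guarded function (illustrative only; it uses cp -p where step 2 uses cp -pu, and the function name is made up):

```shell
# Sketch: restore model.dtb from the .bak copy into both locations,
# mirroring steps 1-2 above. A reboot is still required afterwards.
restore_model_dtb() {   # usage: restore_model_dtb DEFAULTS_DIR ETC_DIR
    [ -f "$1/model.dtb.bak" ] || { echo "no backup in $1" >&2; return 1; }
    cp -p "$1/model.dtb.bak" "$1/model.dtb" &&
    cp -p "$1/model.dtb.bak" "$2/model.dtb"
}
# Real usage would be: restore_model_dtb /etc.defaults /etc
```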


007revad avatar 007revad commented on September 22, 2024 1

Do you suggest I wait for the NAS to finish this building to 100%?

I would wait until it's finished.


007revad avatar 007revad commented on September 22, 2024 1

Chipped in a bit through PayPal.

Thanks.

Anytime you update DSM you'll need to run syno_hdd_db again. So it's easier to schedule it to run at boot-up.

Synology_enable_M2_volume isn't needed on a DS1821+ if you've run syno_hdd_db.

You will need Synology_M2_volume if you want to use the NVMe drives in the E10M20-T1 as a volume. This is because DSM won't allow creating a volume on NVMe drives in an M.2 adaptor card (not even for their own Synology-branded NVMe drives).


MirHekmat avatar MirHekmat commented on September 22, 2024 1

Thanks all worked out now!


007revad avatar 007revad commented on September 22, 2024 1

I think you are in Australia too,

I figured you were also in Australia.

I have a 2nd DS1821+ coming. Do you have any cheaper/good third-party 10G card alternative? I bought the TP-Link TX401 but I couldn't get it to work. Dummy me didn't know back then that Synology is very hardware-restricted.

Apparently Synology's E10G18-T1 uses the Aquantia AQC107 controller but with custom firmware... so other 10G cards based on the Aquantia AQC107 controller (like the ASUS XG-C100C) don't work. With DSM 6 you could download the Linux driver source and compile it on your Synology.

https://servicemax.com.au/tips/synology-10gigabit-ethernet-on-the-cheap/
https://www.reddit.com/r/synology/comments/k4a5px/how_i_got_a_generic_cheap_aqc107_card_working_on/

There are comments on the Xpenology forum saying that doesn't work in DSM 7 (but that may just be because they didn't use the latest driver source).

The Xpenology people do have drivers for the DS1621+ (same CPU as the DS1821+) up to DSM 7.1.1, but nothing for DSM 7.2 (unless the DSM 7.1.1 driver still works). It seems like a lot of work to keep the driver up to date with each DSM update.

I actually have an ASUS XG-C100C in my PC, so I could install my E10G18-T1 in the PC and try the ASUS XG-C100C in my DS1821+ to test it, which seems like a lot of work to save $100 AU.

10G cards that do work by just plugging them in are usually second-hand 10G SFP+ cards, or some 10GbE cards, but they only support 10G and 1G (no 2.5G or 5G).

  • Mellanox Connect X2 or X3
  • Intel based cards.

See:
https://www.reddit.com/r/synology/comments/ssjoi6/thirdparty_10g_nic_compatibility_for_ds_1821_only/
https://www.reddit.com/r/synology/comments/kcd3d6/cost_effective_3rd_party_10gbe_nic_for_synology/


MirHekmat avatar MirHekmat commented on September 22, 2024 1

Sounds like a lot of work. I did read some of those Reddit posts as well, and it seems like a lot of mucking around. Since I'm not invested in SFP at all, I completely agree it's not worth saving $100; I think I'll just buy a Synology DS1821+ compatible one from Amazon. Also good to know the DS1621+ and DS1821+ use the same CPU.

Thank you OZ Fellow!


RozzNL avatar RozzNL commented on September 22, 2024

I did use the create m2 volume script again and created 4x single volumes; I hope this does not mess up the information you need.

synodisk --enum -t cache
No info returned

cat /sys/block/nvme0n1/device/syno_block_info
pciepath=00:01.2,00.0,04.0,00.0

cat /sys/block/nvme1n1/device/syno_block_info
pciepath=00:01.2,00.0,08.0,00.0

cat /sys/block/nvme2n1/device/syno_block_info
pciepath=00:01.3,00.0

cat /sys/block/nvme3n1/device/syno_block_info
pciepath=00:01.4,00.0

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie01
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie02
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0
Yes, it indeed returned the info you assumed.

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:07:00.0:pcie12
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:00.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:02.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:03.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:0c.0
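A pattern worth noting in the pciepath values above: the internal M.2 slots sit directly on a root port (two comma-separated hops), while drives in the E10M20-T1 sit behind the card's PCIe bridge (four hops). A tiny classifier, assuming that convention holds:

```shell
# Sketch: classify a syno_block_info pciepath as internal vs adapter-card.
# Assumption (from the outputs above, not documented anywhere): more than
# two path components means the drive sits behind a bridge, i.e. a card.
classify_pciepath() {   # usage: classify_pciepath "00:01.2,00.0,04.0,00.0"
    hops=$(echo "$1" | awk -F, '{print NF}')
    if [ "$hops" -gt 2 ]; then echo adapter-card; else echo internal; fi
}
```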


007revad avatar 007revad commented on September 22, 2024

Can you download this zip file:
ds1821+_model_with_e10m20-t1.zip

Then

  1. Unzip it to a directory on the DS1821+
  2. cd to that directory.
  3. chmod 644 model.dtb
  4. cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
  5. cp -pu model.dtb /etc.defaults/model.dtb
  6. cp -pu model.dtb /etc/model.dtb
  7. Reboot
  8. Check that Storage Manager now shows the E10M20-T1 and its NVMe drives.
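The steps above can be sketched as a function with a guard so an existing backup is never clobbered by a second run (hypothetical helper, paths as in the steps):

```shell
# Sketch of the model.dtb swap above. Refuses to overwrite an existing
# backup, so the original file can always be restored later.
install_model_dtb() {   # usage: install_model_dtb NEW_DTB TARGET_DIR
    new="$1"; dir="$2"
    [ -f "$new" ] || { echo "missing $new" >&2; return 1; }
    chmod 644 "$new"
    if [ ! -f "$dir/model.dtb.bak" ]; then
        cp -p "$dir/model.dtb" "$dir/model.dtb.bak" || return 1
    fi
    cp -p "$new" "$dir/model.dtb"
}
# Real usage would be: install_model_dtb ./model.dtb /etc.defaults
# followed by copying to /etc/model.dtb and a reboot.
```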


RozzNL avatar RozzNL commented on September 22, 2024

Nope, nothing changed in Storage Manager


RozzNL avatar RozzNL commented on September 22, 2024

No probs Dave, thanks for the help so far.


zcpnate avatar zcpnate commented on September 22, 2024

This appears to be the same as my open issue #132. Reverting to 7.2u1 does consistently fix it, but I am now stuck on that DSM version.


007revad avatar 007revad commented on September 22, 2024

EDIT Don't worry about these commands. See my later comment here.

What do the following commands return:

grep "e10m20-t1" /run/model.dtb

grep "power_limit" /run/model.dtb

grep "100,100,100,100" /run/model.dtb

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+


007revad avatar 007revad commented on September 22, 2024

@zcpnate @cfsnate

Yes, this is the same problem. I was going to reply to issue #132 once @RozzNL had confirmed the fix is working.


RozzNL avatar RozzNL commented on September 22, 2024

Just for the sake of testing, I ran your commands before I edited the script:

What do the following commands return:

grep "e10m20-t1" /run/model.dtb
Binary file /run/model.dtb matches
grep "power_limit" /run/model.dtb
Binary file /run/model.dtb matches
grep "100,100,100,100" /run/model.dtb
Binary file /run/model.dtb matches
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
yes
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
yes

@RozzNL @zcpnate @cfsnate

I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 update 1... which is not what we want.

The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"

After editing syno_hdd_db.sh, redo the steps in this comment.

Commented out, reapplied the model.dtb and the applicable commands, rebooted.
The modified script with the commented-out check is run at shutdown; after boot-up, still no drives in Storage Manager. 👎

EDIT:
I double-checked that I used the modified model.dtb file you gave me; the dates and size are the same as your modified file.

EDIT2:
I do run syno_hdd_db.sh with the -nfr option, btw.


007revad avatar 007revad commented on September 22, 2024

Try disabling the schedules for syno_hdd_db and leaving them disabled, then run this command:
set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+ no


007revad avatar 007revad commented on September 22, 2024

Can you tell me what these commands return:

synodisk --enum -t cache

udevadm info --query path --name nvme0

udevadm info --query path --name nvme1

udevadm info --query path --name nvme2

udevadm info --query path --name nvme3


RozzNL avatar RozzNL commented on September 22, 2024

Disabled schedule, ran command, rebooted, nothing changed in Storage Manager.

Can you tell me what these commands return:

synodisk --enum -t cache
Nothing returned
udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

EDIT:
Looking at your command, and looking in the file "adapter_cards.conf", I see:
[E10M20-T1_sup_nic] and
[E10M20-T1_sup_nvme] and
[E10M20-T1_sup_sata] and
DS1821+=yes, but also lower in the list
DS1821+=no

There are multiple references for the same DS... not only for the DS1821+ but also for other DS models.


007revad avatar 007revad commented on September 22, 2024

I don't understand why synodisk --enum -t cache is not returning anything.

Are there any NVMe errors if you run:
sudo grep synostgd-disk /var/log/messages | tail -10
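If the log is noisy, the same grep can be turned into a tally of distinct error messages; a sketch using only standard tools:

```shell
# Sketch: count how often each synostgd-disk error message appears.
# Strips the timestamp/hostname/pid prefix and groups by the message text.
summarise_disk_errors() {   # reads log lines on stdin
    sed 's/^.*synostgd-disk\[[0-9]*\]: //' | sort | uniq -c | sort -rn
}
# Real usage (as root):
#   grep synostgd-disk /var/log/messages | summarise_disk_errors
```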


RozzNL avatar RozzNL commented on September 22, 2024

I don't understand why synodisk --enum -t cache is not returning anything.

Are there any NVMe errors if you run:
sudo grep synostgd-disk /var/log/messages | tail -10

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1

EDIT:
But running the (modified 1335 line) script
./syno_hdd_db.sh -nfr
Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh

HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03

M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+

Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.
M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.


007revad avatar 007revad commented on September 22, 2024

Synology uses the same adapter_cards.conf on every Synology NAS model (even models without a PCIe slot). It lists which PCIe adapter cards each model supports.

Can you try deleting the line that says "DS1821+=no"?

I also just noticed that every model that officially supports the E10M20-T1 is listed as yes in the [E10M20-T1_sup_sata] section, even though Synology's information says the E10M20-T1 does not support SATA M.2 drives on any NAS model.

The Xpenology people just add the NAS model = yes under every section in adapter_cards.conf


007revad avatar 007revad commented on September 22, 2024

Incorrect power limit number 4!=2

Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.

What does the following command return?
cat /sys/firmware/devicetree/base/power_limit && echo

The only Synology models I own that have M.2 slots have:

  • DS720+ has a "11.55,5.775" power limit.
  • DS1821+ has a "14.85,9.075" power limit.

I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:

  • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
  • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1

Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.

Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.
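The "4!=2" wording suggests DSM compares the number of comma-separated entries in power_limit against the number of NVMe slots it expects; that is an assumption drawn from the log, not confirmed behaviour. Counting the entries is at least trivial to check:

```shell
# Sketch of the check the "Incorrect power limit number 4!=2" log implies:
# DSM appears to require the entry count in power_limit to match the
# number of NVMe slots declared for the model (assumption, not confirmed).
power_limit_entries() {   # usage: power_limit_entries "14.85,9.075"
    echo "$1" | awk -F, '{print NF}'
}
# e.g. compare against: cat /sys/firmware/devicetree/base/power_limit
```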


zcpnate avatar zcpnate commented on September 22, 2024

I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?


007revad avatar 007revad commented on September 22, 2024

I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?

7.2u1 didn't have a power limit. Synology added the power limit in 7.2u2


RozzNL avatar RozzNL commented on September 22, 2024

Ah... the [ ] are separate sections, gotcha.
EDIT:
All 3 sections regarding the E10M20-T1 are already set to yes for the DS1821+, and I can't find the DS1821+=no anymore... I wonder if running your script changed this?

Incorrect power limit number 4!=2

Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.

What does the following command return?

cat /sys/firmware/devicetree/base/power_limit && echo
14.85,9.075

The only Synology models I own that have M.2 slots have:

  • DS720+ has a "11.55,5.775" power limit.
  • DS1821+ has a "14.85,9.075" power limit.

I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:

  • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
  • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1

Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.

Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.


007revad avatar 007revad commented on September 22, 2024

@zcpnate

Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1


007revad avatar 007revad commented on September 22, 2024

All 3 sections regarding the E10M20-T1 are already set to yes for the DS1821+, and I can't find the DS1821+=no anymore... I wonder if running your script changed this?

Yes, running syno_hdd_db would have set it back to yes. But I don't think it matters.


zcpnate avatar zcpnate commented on September 22, 2024

@zcpnate

Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1

ash-4.4# smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read NVMe Identify Controller failed: NVMe Status 0x400b


007revad avatar 007revad commented on September 22, 2024

Read NVMe Identify Controller failed: NVMe Status 0x400b

On 7.2u3 I get Read NVMe Identify Controller failed: NVMe Status 0x4002

Someone else on 7.2.1 gets Read NVMe Identify Controller failed: NVMe Status 0x200b

The only thing that's consistent is that smartctl --info for nvme drives doesn't work in DSM 7.2


zcpnate avatar zcpnate commented on September 22, 2024

I tested a few other nvme drives and got 200b for my internally mounted nvme drives acting as a volume.


RozzNL avatar RozzNL commented on September 22, 2024

I too get the 0x200b


007revad avatar 007revad commented on September 22, 2024

Can you try:

synodiskport -cache

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1


zcpnate avatar zcpnate commented on September 22, 2024

Can you try:

synodiskport -cache

ash-4.4# synodiskport -cache
nvme0n1 nvme1n1 nvme2n1 nvme3n1

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Device: /dev/nvme0n1, PCI Slot: 1, Card Slot: 2

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Device: /dev/nvme1n1, PCI Slot: 1, Card Slot: 1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2


007revad avatar 007revad commented on September 22, 2024

I had a typo in the last command. It should return the same result, but the command should have been:
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1


zcpnate avatar zcpnate commented on September 22, 2024

I had a typo in the last command. It should return the same result, but the command should have been: synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

blind copy paste haha didn't catch that


007revad avatar 007revad commented on September 22, 2024

While searching for what causes the "nvme_model_spec_get.c:90 Incorrect power limit number 4!=2" log entry, I found that 7.2-U3 has 2 scripts related to NVMe power. I need to check if 7.2.1 still has those scripts.

syno_nvme_power_limit_set.service runs /usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh

/usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh then runs /usr/syno/lib/systemd/scripts/nvme_power_state.sh -d $dev_name -p $pwr_limit, which sets the power limit to $pwr_limit for NVMe drive $dev_name.

It can also list the power states of the specified NVMe drive. Strangely, both my DS720+ and DS1821+ return the exact same power states even though they have different power_limits set in model.dtb.

For me /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 1:   max_power 3.00W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:3.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 2:   max_power 2.20W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:2.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 3:   max_power 0.0150W non-operational enlat:1500 exlat:2500 rrt:3 rrl:3 rwt:3 rwl:3 idle_power:0.0150 W     non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050W non-operational enlat:10000 exlat:6000 rrt:4 rrl:4 rwt:4 rwl:4 idle_power:0.0050 W    non-operational rrt 4   rrl 4   rwt 4   rwl 4
ps 5:   max_power 0.0033W non-operational enlat:176000 exlat:25000 rrt:5 rrl:5 rwt:5 rwl:5 idle_power:0.0033 W  non-operational rrt 5   rrl 5   rwt 5   rwl 5


========== nvme0 result ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0

add to task schedule? false
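For comparing power states across models, the max_power column can be pulled out of that output with awk; a sketch that handles both the "4.70W" and "7.50 W" spellings seen in these outputs (the helper name is made up):

```shell
# Sketch: extract the per-state max_power values from
# nvme_power_state.sh-style output. Handles both "4.70W" and "7.50 W".
max_powers() {   # reads "ps N: max_power ..." lines on stdin
    awk '/^ps [0-9]+:/ {
        for (i = 1; i < NF; i++)
            if ($i == "max_power") { sub(/W$/, "", $(i+1)); print $(i+1) }
    }'
}
# Real usage: /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 | max_powers
```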


RozzNL avatar RozzNL commented on September 22, 2024

/usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0
For me it returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0
ps 1:   max_power 5.90 W        operational     rrt 1   rrl 1   rwt 1   rwl 1
ps 2:   max_power 3.60 W        operational     rrt 2   rrl 2   rwt 2   rwl 2
ps 3:   max_power 0.0700 W      non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050 W      non-operational rrt 4   rrl 4   rwt 4   rwl 4


========== nvme0 result ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0

add to task schedule? false


007revad avatar 007revad commented on September 22, 2024

Yours looks more like I'd expect the output of a Synology command or script to look.

Does this return an error? Or a list of nvme drives and power limits?

nvme_list=$(synodiskport -cache)
output=$(/usr/syno/bin/synonvme --get-power-limit $nvme_list)
echo ${output[@]}
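Note that $output there is a plain string, so a single silent failure blanks the whole result. Querying each device in its own call makes per-drive failures visible; a hedged sketch where "$tool" stands in for Synology's synonvme binary:

```shell
# Sketch: query each NVMe device separately so one failing call doesn't
# blank the combined result. "$tool" stands in for synonvme; the function
# name and output format are illustrative.
get_power_limits() {   # usage: get_power_limits TOOL DEV...
    tool="$1"; shift
    for dev in "$@"; do
        if out=$("$tool" --get-power-limit "$dev" 2>/dev/null) && [ -n "$out" ]; then
            echo "$dev: $out"
        else
            echo "$dev: no result"
        fi
    done
}
# Real usage would be: get_power_limits synonvme $(synodiskport -cache)
```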


RozzNL avatar RozzNL commented on September 22, 2024

Nope, it doesn't return anything...


007revad avatar 007revad commented on September 22, 2024

So what about these:

nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}

output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}

synonvme --get-power-limit nvme0n1

synonvme --get-power-limit nvme1n1

synonvme --get-power-limit nvme2n1

synonvme --get-power-limit nvme3n1


RozzNL avatar RozzNL commented on September 22, 2024

All return nothing 👎


007revad avatar 007revad commented on September 22, 2024

Does synodiskport -cache

return:
nvme0n1 nvme1n1 nvme2n1 nvme3n1


RozzNL avatar RozzNL commented on September 22, 2024

Nope, still returns nothing... and I still have the same errors, btw:

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1


zcpnate avatar zcpnate commented on September 22, 2024

FYI these power limit scripts do not exist on 7.2u1


007revad avatar 007revad commented on September 22, 2024

I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.

@RozzNL
Can you do the following:

  1. Edit line 1334 in syno_hdd_db.sh to change this:
    • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
    • to this:
    • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
  2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf

Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.


007revad avatar 007revad commented on September 22, 2024

If anyone wants a quick solution (instead of waiting for more trial-and-error testing), you can replace /usr/lib/libsynonvme.so.1 with the one from DSM 7.2-64570. I know this works in 7.2 update 2 and update 3, but I have no idea if it works in 7.2.1.

  1. Download DS1821+_64570_libsynonvme.so.1.zip and unzip it.
  2. Back up the existing libsynonvme.so.1, appending the build and update version:
    • build=$(get_key_value /etc.defaults/VERSION buildnumber)
    • nano=$(get_key_value /etc.defaults/VERSION nano)
    • cp -p /usr/lib/libsynonvme.so.1 /usr/lib/libsynonvme.so.1.${build}-${nano}.bak
  3. cd to the folder where you unzipped the downloaded libsynonvme.so.1
  4. mv -f libsynonvme.so.1 /usr/lib/libsynonvme.so.1 && chmod a+r /usr/lib/libsynonvme.so.1
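The get_key_value helper used in step 2 is Synology-specific, but it is plain key="value" parsing; a sketch against a hypothetical sample of /etc.defaults/VERSION:

```shell
# Sketch: emulate Synology's get_key_value for simple key="value" files
# such as /etc.defaults/VERSION, to build the backup suffix used in
# step 2 (e.g. libsynonvme.so.1.64570-3.bak). Illustrative only.
key_value() {   # usage: key_value FILE KEY
    awk -F= -v k="$2" '$1 == k { gsub(/"/, "", $2); print $2; exit }' "$1"
}
# Real usage would be:
#   build=$(key_value /etc.defaults/VERSION buildnumber)
#   nano=$(key_value /etc.defaults/VERSION nano)
```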


RozzNL avatar RozzNL commented on September 22, 2024

I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.

@RozzNL Can you do the following:

  1. Edit line 1334 in syno_hdd_db.sh to change this:

    • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
    • to this:
    • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
  2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf

Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.

Good morning all,
Performed the comment-out, removed the DS1821+=yes line, rebooted, no change.

I have indeed run the enable_m2_volume script before, so I restored that by running the script again and rebooted, but I could not get back into the GUI; I had to reboot 2x again. After a successful reboot, still no change.

Checked that the line was still commented out and the DS1821+=yes line still removed (just to be sure the m2_volume script had not interfered), and I did forget to run the hdd_db script after I edited it, duh... so I reran it all again to check; still no change.

EDIT:
Checking some of the commands you sent previously.
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Can't get the location of /dev/nvme3n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Can't get the location of /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

I'm curious if the issues @RozzNL is having are the same for everyone.

@zcpnate what does synodisk --enum -t cache return for you?

Are you willing to try 7.2 update 3 again, but this time:

  1. Disable any scheduled scripts first (and leave them disabled).
  2. Update to 7.2 update 3.
  3. Check if synodisk --enum -t cache returns something.
  4. Download the following test script and model.dtb file and put them both in the same directory.
  5. Run the script and check storage manager.
  6. Reboot and check storage manager.

download_button

from synology_hdd_db.

zcpnate avatar zcpnate commented on September 22, 2024

Can get you this info tomorrow. I'd be willing to upgrade to U3 for testing, as I'm pretty sure I can reliably downgrade to U1 in the event of no success. Also totally willing to jump on a Zoom and we can debug in real time.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

@zcpnate
Did you get a chance to try 7.2 update 3 with the model.dtb file from #148 (comment)

and line 1335 in syno_hdd_db.sh changed from this:
check_modeldtb "$c"

to this:
#check_modeldtb "$c"

Then reboot.

@RozzNL
There seems to be something really wrong with your DSM installation. Can you reinstall DSM 7.2 update 3 following the steps here: https://github.com/007revad/Synology_DSM_reinstall

Note: skip steps 6 and 9 because you want DSM 7.2 update 1 to update itself to update 3.

Then do the same steps I outlined above for zcpnate.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

@zcpnate Did you get a chance to try 7.2 update 3 with the model.dtb file from #148 (comment), with line 1335 in syno_hdd_db.sh changed to #check_modeldtb "$c", then reboot?

@RozzNL There seems to be something really wrong with your DSM installation. Can you reinstall DSM 7.2 update 3 following the steps here (skip steps 6 and 9): https://github.com/007revad/Synology_DSM_reinstall — then do the same steps I outlined above for zcpnate.

I downgraded to a full release of DSM_DS1821+_64570.pat, rebooted, and was auto-upgraded to the latest release of DSM 7.2-64570 U3 after the reboot.
As expected I did see the 2 internal NVMEs but not the E10M20-T1 card (so no 2x NVMEs and 10GbE).
Ran syno_hdd_db.sh with the 2 lines commented out from your earlier request (line 1334 #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA" and line 1335 #check_modeldtb "$c").

After the reboot, I saw both internal NVMEs in Storage Manager, could online assemble them, and I got the 10GbE back. Still no NVMEs on the E10M20-T1 card, but I think this is expected due to not running your syno_create_m2_volume.sh script, right?

So awaiting your further orders :-)

I did run the following commands for you:
synodisk --enum -t cache
************ Disk Info ***************

Disk id: 1
Disk path: /dev/nvme2n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 41 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme3n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 41 C

grep "e10m20-t1" /run/model.dtb
returns nothing
grep "power_limit" /run/model.dtb
Binary file /run/model.dtb matches
grep "100,100,100,100" /run/model.dtb
returns nothing
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
yes
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
returns nothing, but it is commented out in hdd_db

udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

sudo grep synostgd-disk /var/log/messages | tail -10
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1

/sys/firmware/devicetree/base/power_limit && echo
-ash: /sys/firmware/devicetree/base/power_limit: Permission denied

smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read NVMe Identify Controller failed: NVMe Status 0x200b

synodiskport -cache
nvme2n1 nvme3n1

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}
nvme2n1 nvme3n1
output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}
nvme2n1:9.075 nvme3n1:9.075
synonvme --get-power-limit nvme0n1
returns nothing
synonvme --get-power-limit nvme1n1
returns nothing
synonvme --get-power-limit nvme2n1
nvme2n1:14.85
synonvme --get-power-limit nvme3n1
nvme3n1:14.85

EDIT:
After uncommenting line 1334 in syno_hdd_db.sh back to enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA":
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
yes

udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

Maybe try your modified model.dtb as a next step?

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

/sys/firmware/devicetree/base/power_limit && echo
-ash: /sys/firmware/devicetree/base/power_limit: Permission denied

Sorry, that command should have been:
cat /sys/firmware/devicetree/base/power_limit && echo

udevadm info --query path --name nvme0 /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1 /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

This appears to suggest I have the ports back to front. What does this command return:
syno_slot_mapping

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

/sys/firmware/devicetree/base/power_limit && echo
-ash: /sys/firmware/devicetree/base/power_limit: Permission denied

Sorry, that command should have been: cat /sys/firmware/devicetree/base/power_limit && echo

14.85,9.075

udevadm info --query path --name nvme0 /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1 /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

This appears to suggest I have the ports back to front. What does this command return: syno_slot_mapping

System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

Try these:

grep "e10m20-t1" /etc.defaults/model.dtb

grep "power_limit" /etc.defaults/model.dtb

grep "100,100,100,100" /etc.defaults/model.dtb

If all 3 of the above commands return "Binary file /etc.defaults/model.dtb matches" then run these commands:

chmod 644 /etc.defaults/model.dtb

cp -pu /etc.defaults/model.dtb /etc/model.dtb

cp -pu /etc.defaults/model.dtb /run/model.dtb
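The check-then-copy sequence above can be wrapped so the copy only happens when all three markers are present. A hedged sketch (the grep strings and target paths are the ones from this thread; run as root on the NAS):

```shell
#!/bin/sh
# Hedged sketch: only copy the patched model.dtb over the live copies
# when the e10m20-t1, power_limit and 100,100,100,100 markers are all
# present, so a stock (unpatched) file is left untouched.
install_model_dtb() {   # install_model_dtb <patched-dtb> <etc-dtb> <run-dtb>
    src=$1
    grep -q "e10m20-t1"       "$src" || return 1
    grep -q "power_limit"     "$src" || return 1
    grep -q "100,100,100,100" "$src" || return 1
    chmod 644 "$src"
    cp -pu "$src" "$2"
    cp -pu "$src" "$3"
}

# On the DS1821+ (as root):
#   install_model_dtb /etc.defaults/model.dtb /etc/model.dtb /run/model.dtb
```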

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Try these:
grep "e10m20-t1" /etc.defaults/model.dtb

Returns nothing

grep "power_limit" /etc.defaults/model.dtb

Binary file /etc.defaults/model.dtb matches

grep "100,100,100,100" /etc.defaults/model.dtb

Returns nothing

Did not run the commands below.

If all 3 of the above commands return "Binary file /etc.defaults/model.dtb matches" then run these commands:

chmod 644 /etc.defaults/model.dtb

cp -pu /etc.defaults/model.dtb /etc/model.dtb

cp -pu /etc.defaults/model.dtb /run/model.dtb

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

Download this zip file:
ds1821+_model_with_e10m20-t1.zip

Then

  1. Unzip it to a directory on the DS1821+
  2. cd to that directory.
  3. chmod 644 model.dtb
  4. cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
  5. cp -pu model.dtb /etc.defaults/model.dtb
  6. cp -pu model.dtb /etc/model.dtb
  7. cp -pu model.dtb /run/model.dtb
  8. Reboot
  9. Check that storage manager now shows the E10M20-T1 and its NVMe drives.
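Steps 3 to 7 above can also be run as one function that keeps a backup of the stock file. A hedged sketch: the optional root prefix is only there so the sequence can be rehearsed outside DSM; on the NAS call it with no prefix, as root, from the unzip directory.

```shell
#!/bin/sh
# Hedged sketch of steps 3-7 above as one function.
install_dtb() {   # install_dtb <new-model.dtb> [<root-prefix>]
    new=$1 root=${2:-}
    chmod 644 "$new"
    # keep a backup of the stock file so it can be restored later
    cp -p  "$root/etc.defaults/model.dtb" "$root/etc.defaults/model.dtb.bak"
    cp -pu "$new" "$root/etc.defaults/model.dtb"
    cp -pu "$new" "$root/etc/model.dtb"
    cp -pu "$new" "$root/run/model.dtb"
}

# On the DS1821+:  install_dtb ./model.dtb   # then reboot
```

The `cp -p` on the backup preserves the stock file's timestamp, so `cp -pu` afterwards sees the new file as newer and copies it.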

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Will do the above when I get back from work.

Edit:
Unzipped and copied model.dtb 3x as per instructions.
I did notice that /run/model.dtb gets rewritten at bootup, am I correct? I saw this from the timestamp on the file: it had changed, while the other 2 still had the same timestamp.
Again no NVMEs in Storage Manager (all are gone).

But:
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches

syno_slot_mapping
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01:
02:

PCIe Slot 1: E10M20-T1
01:
02:

synodiskport -cache
Returns blank

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

I've been thinking this would be a lot easier if I had an E10M20-T1, not just for DSM 7.2 update 3 and 7.2.1 but for when new versions of the storage manager package are released. Where I live an E10M20-T1 costs 1/3rd the price of a DS1821+!?!?

So I have a question for those who have an E10M20-T1. Do the included M.2 heatsinks come with sticky single-use thermal pads? It looks like it would be hard to temporarily remove an M.2 drive for testing.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Dave, I have swapped my internal NVMe's multiple times with the ones on the E10M20-T1, totally no problems removing the heatsink.
And because I like to tinker and tweak as much as possible, I also installed 2 coolers on the internal NVMe's (Gelid Solutions Icecap M.2 SSD Cooler).

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

totally no problems removing the heatsink

Thanks. I just bought a E10M20-T1 online and paid for express shipping. The online store's distribution center is only a few suburbs away from me so hopefully it will arrive quickly (they don't allow pick-up).

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

I just unpacked DSM 7.2 update 3 for all 113 Synology models that can use DSM 7.2

  1. 36 of them use a devicetree (model.dtb).
  2. 14 of those support E10M20-T1 and M2D20.
  3. All 14 use the same pcie_postfix = "00.0,08.0,00.0" and pcie_postfix = "00.0,04.0,00.0" (for both E10M20-T1 and M2D20).

This confirms that the pcie_postfix values that I used in the model.dtb file were correct.

I also noticed that those 14 models that support E10M20-T1 do not have SATA M.2 support enabled in model.dtb for the E10M20-T1, even though they all have E10M20-T1_sup_sata enabled in adapter_cards.conf

This confirms that adding entries for SATA M.2 support in model.dtb won't make any difference.
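For anyone who wants to verify what their own compiled model.dtb contains, it can be decompiled with dtc (the device tree compiler). A hedged sketch: it assumes dtc is installed somewhere (it is not part of stock DSM), so you may need to copy the file to another Linux box first.

```shell
#!/bin/sh
# Hedged sketch: decompile model.dtb to readable source and show the
# entries discussed in this thread. Assumes dtc is available.
if command -v dtc >/dev/null 2>&1; then
    dtc -I dtb -O dts /etc.defaults/model.dtb 2>/dev/null |
        grep -E 'E10M20|pcie_postfix|power_limit' || true
else
    echo "dtc not installed -- decompile the file on another Linux box" >&2
fi
```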

I've compiled 2 new model.dtb files for you to try:

  1. model.dtb with power_limit = "100,100"; ds1821+_100x2.zip
  2. model.dtb with power_limit = "14.85,9.075"; ds1821+_14.85.zip

Unzip it to a directory on the DS1821+ then

  1. cd to that directory.
  2. chmod 644 model.dtb
  3. cp -pu model.dtb /etc.defaults/model.dtb
  4. cp -pu model.dtb /etc/model.dtb
  5. Check storage manager.
  6. Reboot.
  7. Check storage manager again.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

I've compiled 2 new model.dtb files for you to try:

  1. model.dtb with power_limit = "100,100"; ds1821+_100x2.zip

Still only the internal NVMEs in Storage Manager, before and after.

  1. model.dtb with power_limit = "14.85,9.075"; ds1821+_14.85.zip

Still only the internal NVMEs in Storage Manager, before and after.

Both files show same info below:
synodiskport -cache
nvme2n1 nvme3n1
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Returns nothing

syno_slot_mapping
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1
01:
02:

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

grep "100,100,100,100" /etc.defaults/model.dtb
Returns nothing

Depending on which model.dtb file you were using you'd need to run either:
grep "100,100" /etc.defaults/model.dtb
or
grep "14.85,9.075" /etc.defaults/model.dtb

What do these 3 commands return:
ls -l /etc.defaults/model.dtb
ls -l /etc/model.dtb
ls -l /run/model.dtb

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

grep "100,100,100,100" /etc.defaults/model.dtb
Returns nothing

Depending on which model.dtb file you were using you'd need to run either: grep "100,100" /etc.defaults/model.dtb or grep "14.85,9.075" /etc.defaults/model.dtb

Gotcha.

What do these 3 commands return:
ls -l /etc.defaults/model.dtb
ls -l /etc/model.dtb
ls -l /run/model.dtb

ls -l /etc.defaults/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc.defaults/model.dtb
ls -l /etc/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc/model.dtb
ls -l /run/model.dtb
-rw-r--r-- 1 root root 3848 Oct 11 17:13 /run/model.dtb

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

Can you download this zip file:
ds1821+_model_with_e10m20-t1.zip

Unzip it to a directory on the DS1821+ then

  1. cd to that directory.
  2. chmod 644 model.dtb
  3. sudo chown root:root model.dtb
  4. cp -pu model.dtb /etc.defaults/model.dtb
  5. cp -pu model.dtb /etc/model.dtb
  6. Check storage manager.
  7. Reboot.
  8. Check storage manager again.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Will do when I get back home from work.

EDIT:
Internal NVMEs gone again in Storage Manager after reboot.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

FYI Do NOT update to 7.2.1 yet.

I just updated to 7.2.1 and:

  1. My single internal NVMe drive was showing as critical because DSM thought there should be 2 NVMe drives.
    • This single NVMe drive was previously migrated from a DS720+ and "online assembled".
    • This time the online assemble was grayed out and the only option was Remove (which I did as there was no data on it).
  2. My E10M20-T1 and its NVMe drives are missing.
  3. 7.2.1 updated itself to 7.2.1 Update 1 (so I don't know if this is a 7.2.1 or 7.2.1 Update 1 issue).

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Which model.dtb file did you use? (Original? 100? 14.85?)
Because I still do not see the drives in Storage Manager after performing your adapter_cards.conf changes in /etc and /etc.defaults.
My LAN 5 never went away after the first time I used your syno_hdd_db script.

So I'm glad you got it working at your side!!! Now my turn ;)
Still, we need to keep this working after an update, so not out of the woods yet.

EDIT:
I restored the model.dtb.bak from a couple of steps back and I now do see the internal disks, but still no E10M20-T1; I do however still have LAN 5.
So I'm guessing I still have a different setup than yours?!?

from synology_hdd_db.

zcpnate avatar zcpnate commented on September 22, 2024

If I'd thought of checking that "/usr/syno/etc/adapter_cards.conf" matched "/usr/syno/etc.defaults/adapter_cards.conf" and they both contained DS1821+ in the correct places I could have saved $300

I haven't attempted this yet but I did just shoot over a paypal donation to help with the cost of the card. Thanks for all your hard work!

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

I also just did a donation...totally forgot about it...
Dave, thank you very much for the work you have already done.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

I used a model.dtb with "100,100,100,100" like the one in the first zip file ds1821+_model_with_e10m20-t1.zip

But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file model.zip

  1. cd to the directory you extracted the downloaded model.dtb to.
  2. chmod 644 model.dtb
  3. sudo chown root:root model.dtb
  4. cp -pu model.dtb /etc.defaults/model.dtb
  5. cp -pu model.dtb /etc/model.dtb
  6. Reboot.
  7. Check storage manager again.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

OK Dave, I really don't know where it is going wrong for me?

  • Copied model.dtb (the 100,100,100,100 version from the above comment) into /etc and /etc.defaults; both are root.
  • Checked adapter_cards.conf in /usr/syno/etc and /usr/syno/etc.defaults; both have E10M20-T1_sup_nvme, E10M20-T1_sup_nic and even E10M20-T1_sup_sata all set to DS1821+=yes; both are root.
  • Did not run the syno_hdd_db.sh script; this was performed only after the reinstallation of update 3 a few days ago.
  • No other scripts of yours.
  • LAN 5 never went away after the first run of syno_hdd_db.sh.
  • Only the internal NVMe disks are shown and I can perform an online assemble.
  • Still no PCIe card with disks.

So where is it going wrong?

EDIT:

But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file model.zip

For me this removes my internal nvme drives.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file model.zip

For me this removes my internal nvme drives.

Do the fans run at full speed?

I'm not sure why it's different for you. I'll downgrade DSM to 7.2 update 3 and try it again and document the exact steps I do.

I did notice today that the values in /run/adapter_cards.conf did not match those in /usr/syno/etc.defaults/adapter_cards.conf

What does the following command return:
cat /run/adapter_cards.conf

I've spent the last few hours creating a test version of syno_hdd_db to do all the required steps, so we'll all be doing the exact same steps. But I'm momentarily stuck at trying to insert the power_limit into the model.dtb file.

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

No, the fans run normally; I use cool mode btw.

cat /run/adapter_cards.conf

M2D20_sup_nvme=no
E10M20-T1_sup_sata=yes
E10M20-T1_sup_nic=yes
M2D17_sup_sata=no
E10M20-T1_sup_nvme=yes
M2D18_sup_sata=no
M2D17_sup_nic=no
M2D18_sup_nic=no
M2D20_sup_sata=no
M2D17_sup_nvme=no
M2D18_sup_nvme=no
FX2422N_sup_nic=no
FX2422N_sup_nvme=no
FX2422N_sup_sata=no
M2D20_sup_nic=no

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

I haven't forgotten you guys.

I've done a lot of testing, while documenting every change, and been running around in circles. At one point I replaced the E10M20-T1 with the M2D18 and spent half a day trying to get it working again, then I noticed the M2D18 was not fully plugged into the PCIe slot!?!?

I also downgraded DSM from 7.2.1 update 1 to 7.2 update 3, which caused its own issues, so I was not sure if the issues were caused by parts of DSM being broken (Synology account, File Station, Schedules, packages etc). I solved that by downgrading to DSM 7.2 update 1.

My plan is to get both the M2D18 and E10M20-T1 working in DSM 7.2 update 1,

  • Then update to DSM 7.2 Update 3 and get them working.
  • Then update to DSM 7.2.1 and get them working.
  • Then update to DSM 7.2.1 Update 1 and get them working.

I want to get both cards working as Synology intended (for a cache) without running any of my scripts.

Because I got tired of copying and pasting dozens of commands every time I made a change and rebooted I've written a script that runs all the commands and outputs the results in a readable format.

https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_check.sh

FYI this is from immediately after reinstalling DSM and not running any scripts or editing anything:

root@DISKSTATION:~# /volume1/scripts/m2_card_check.sh

 Checking permissions and owner on model.dtb files
-rw-r--r-- 1 root root 3583 Jul 20 02:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 16 12:01 /etc/model.dtb
-rw-r--r-- 1 root root 3583 Oct 16 20:21 /run/model.dtb

 Checking power_limit="100,100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking E10M20-T1 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking M2D20 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking M2D18 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking /run/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /usr/syno/etc.defaults/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /usr/syno/etc/adapter_cards.conf
All OK

 Checking synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 1
>> Disk path: /dev/nvme0n1
>> Disk model: WD_BLACK SN770 500GB
>> Total capacity: 465.76 GB
>> Tempeture: 30 C

 Checking syno_slot_mapping
System Disk
Internal Disk
01:
02:
03: /dev/sata1
04: /dev/sata2
05:
06: /dev/sata3
07: /dev/sata4
08:

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme0n1
02:


 Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.3/0000:0d:00.0/nvme/nvme0
nvme1: device node not found
nvme2: device node not found
nvme3: device node not found

 Checking devicetree Power_limit
14.85,9.075

 Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking if nvme drives in PCIe card with synodisk
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"

 Checking nvme drives in /run/synostorage/disks
nvme0n1

 Checking nvme block devices in /sys/block
nvme0n1

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Sounds like a plan Dave, just do your thing.
Personally not in any hurry.
My idea was to use the internals as cache and the PCIe card as storage.
I am away from home for a few days but I should be able to test some settings remotely if needed.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

Synology made changes that added a power limit for NVMe drives in DSM 7.2 Update 2. There were no NVMe related changes in Update 3. So what works in update 2 also works Update 3.

Getting the M2D18 and E10M20-T1 working in DSM 7.2 Update 1 was easy.

But getting them working in DSM 7.2 Update 2 was a lot harder. I'm actually wondering if my DS1821+ was running Update 1 when I previously got the E10M20-T1 working. I wish I'd taken a screenshot of the DSM version together with storage manager.

The good news is I have my M2D18 and E10M20-T1 both working in DSM 7.2 Update 3. Note: I have not run any of my scripts yet because I didn't want to introduce any extra variables to the testing.

I also have not tested DSM 7.2.1 yet because rolling back to 7.2 update 1 was difficult.

M2D18 working in DSM 7.2 Update 3
m2d18_20231018-114916

E10M20-T1 working in DSM 7.2 Update 3
e10m20-t1_and_nic_20231018-122342

Can you do the following to test it:

  1. Download and run m2_card_fix.sh
  2. Reboot.
  3. If it didn't work, reboot again. See note.

Note: When I first ran m2_card_fix.sh and rebooted I found /run/adapter_cards.conf was missing. I created the missing file by hand but when I rebooted DSM replaced it... so rebooting a 2nd time should restore /run/adapter_cards.conf if it is missing.

I only noticed /run/adapter_cards.conf was missing when I ran m2_card_check.sh and saw
ls: cannot access '/run/adapter_cards.conf': No such file or directory
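A quick post-reboot check for that condition can be scripted. A hedged sketch (the path and key prefix are the ones from this thread; DSM regenerates the file at boot):

```shell
#!/bin/sh
# Hedged sketch: confirm /run/adapter_cards.conf survived the reboot
# and show its E10M20-T1 keys.
if [ -f /run/adapter_cards.conf ]; then
    grep '^E10M20-T1' /run/adapter_cards.conf || true
else
    echo "/run/adapter_cards.conf is missing -- reboot again" >&2
fi
```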

If you have any issues please run m2_card_check.sh and reply with the output.

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

FYI I noticed that my NVMe drives sometimes changed their number.

The nvme drive in the internal slot 1 was nvme0 when the M.2 card was not being detected.

Then when the M.2 card was detected, the nvme drive in the internal slot 1 had changed to nvme1. And the nvme drive in slot 1 of the M.2 card was now nvme0.

When I had 3 NVMe drives installed the drive in the internal slot was nvme2. After removing one of the drives from the M.2 card the drive in the internal slot became nvme1.

So if you have 4 of the same model NVMe drives and run syno_m2_volume.sh to create a volume on the drives in the M.2 card it will be difficult to tell which drives are installed where. I will update syno_m2_volume.sh to show if the drive is in an M.2 card.

In the meantime you can see where each NVMe drive is located with:
syno_slot_mapping | grep -A 7 'SSD'
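There's also a generic sysfs way to see which PCIe path each nvme device sits on, which works on any Linux, not just DSM. A hedged sketch; in this thread's output, paths under the 0000:00:01.2 bridge are the drives on the add-in card:

```shell
#!/bin/sh
# Hedged sketch: map each nvme controller to its PCIe device path via
# plain Linux sysfs (nothing DSM-specific).
for d in /sys/class/nvme/nvme*; do
    [ -e "$d" ] || continue                       # no nvme devices present
    printf '%s -> %s\n' "${d##*/}" "$(readlink -f "$d/device")"
done
```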

from synology_hdd_db.

RozzNL avatar RozzNL commented on September 22, 2024

Can you do the following to test it:

  1. Download and run m2_card_fix.sh
  2. Reboot.
  3. If it didn't work, reboot again. See note.

If you have any issues please run m2_card_check.sh and reply with the output.

WHOOOHOOO....
Screen_DS1821+_Ro
Now we're getting somewhere, Dave!
This was after running your fix script and only 1x reboot.
Since I am not at home right now, I am not going to create storage pools just yet, but a question: do I need your other scripts to create a storage pool on the PCIe card? I want to run the internal NVMes as cache (not yet decided if I want to use read/write or only read), and the NVMes on the PCIe card will be a RAID 1 storage pool with 1x volume.

EDIT:
For your info,

./m2_card_check.sh
DSM 7.2-64570 Update 3
2023-10-18 21:17:56

Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking permissions and owner on model.dtb files
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc/model.dtb
-rw-r--r-- 1 root root 4460 Oct 18 21:00 /run/model.dtb

Checking power_limit="100,100,100,100" is in model.dtb files
All OK

Checking E10M20-T1 is in model.dtb files
All OK

Checking M2D20 is in model.dtb files
All OK

Checking M2D18 is in model.dtb files
All OK

Checking permissions and owner on adapter_cards.conf files
-rw-r--r-- 1 root root 3170 Oct 13 11:58 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3170 Oct 14 12:58 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 286 Oct 18 21:00 /run/adapter_cards.conf

Checking /usr/syno/etc.defaults/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /usr/syno/etc/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /run/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking synodisk --enum -t cache
************ Disk Info ***************

Disk id: 2
Slot id: 1
Disk path: /dev/nvme0n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Slot id: 1
Disk path: /dev/nvme1n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Disk path: /dev/nvme2n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 38 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme3n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 40 C

Checking syno_slot_mapping

System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1


Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
nvme2: /devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
nvme3: /devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

Checking devicetree Power_limit
14.85,9.075

Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking if nvme drives in PCIe card with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"

Checking nvme drives in /run/synostorage/disks
nvme0n1
nvme1n1
nvme2n1
nvme3n1

Checking nvme block devices in /sys/block
nvme0n1
nvme1n1
nvme2n1
nvme3n1

Checking synostgd-disk log

Current date/time: 2023-10-18 21:17:57
Last boot date/time: 2023-10-18 21:17:00

No synostgd-disk logs since last boot
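As an aside, the three `*_sup_*` flags checked above live in INI-style sections of adapter_cards.conf, with one `model=yes/no` line per NAS model. Here is a minimal sketch of that lookup, using invented sample content rather than a real NAS's file:

```shell
# Build a toy adapter_cards.conf (contents are illustrative only).
cat > /tmp/adapter_cards.conf <<'EOF'
[E10M20-T1_sup_nvme]
DS1821+=yes
[E10M20-T1_sup_nic]
DS1821+=yes
[M2D20_sup_nvme]
DS1821+=no
EOF

# Print the value of a key inside a given [section].
check_key() {  # usage: check_key <section> <key> <file>
  awk -F= -v s="[$1]" -v k="$2" '
    $0 == s         { in_s = 1; next }   # entered the wanted section
    /^\[/           { in_s = 0 }         # any other section header ends it
    in_s && $1 == k { print $2 }         # matching key: print its value
  ' "$3"
}

check_key E10M20-T1_sup_nvme "DS1821+" /tmp/adapter_cards.conf   # prints: yes
check_key M2D20_sup_nvme "DS1821+" /tmp/adapter_cards.conf       # prints: no
```

On a real NAS the update side is done with Synology's `set_section_key_value` helper, as in the commands earlier in this thread.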

from synology_hdd_db.

007revad avatar 007revad commented on September 22, 2024

Excellent.

Interesting that you didn't need a 2nd reboot (as /run/adapter_cards.conf still existed).

do i need your other scripts to create a storage pool on the pcie card?

For NVMe drives in a PCIe card you need Synology_M2_volume to create the storage pool and then do an online assemble in storage manager. This is because storage manager won't let you create a storage pool on NVMe drives in a PCIe card. I should see if I can get around that.

Checking synodisk --enum -t cache
************ Disk Info ***************
Disk id: 2
Slot id: 1
Disk path: /dev/nvme0n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Slot id: 1
Disk path: /dev/nvme1n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Disk path: /dev/nvme2n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 38 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme3n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 40 C

Synology really needs to learn how to spell Temperature.

Your NVMe drives are a lot warmer than my little 500GB NVMe drives. My internal NVMe is 28 C and the one in the E10M20-T1 is 33 C (without the heatsink installed). Though I do have 2 empty bays next to the internal M.2 slots, and I currently have the cover off the NAS.

Checking syno_slot_mapping
PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1

I notice that nvme1 is in the E10M20-T1 M.2 slot-1 and nvme0 is in M.2 slot-2. I should have tested with 2 NVMe drives in the PCIe card, as I'd expect nvme0 to be in the E10M20-T1 M.2 slot-1, like this:

01: /dev/nvme0n1
02: /dev/nvme1n1

I wonder if Synology screwed that up because all the NAS models that have E10M20-T1 in model.dtb have 08.0 for slot-1 and 04.0 for slot-2. I can switch them around but I'm not sure if I should.
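For what it's worth, the slot numbering is derived from each drive's PCIe device path (as shown in the udevadm check above), and a drive behind the E10M20-T1's PCIe switch also has a longer path than one in an internal slot. That gives a crude way to tell them apart; the heuristic below is my own illustration, not the actual logic of any Synology tool or of these scripts:

```shell
# Classify an NVMe udev path as card-attached or internal by counting how
# many PCI device functions (dddd:bb:dd.f) appear in it. An internal slot
# is root port + drive (2 functions); the card adds bridge hops (>2).
classify() {
  n=$(echo "$1" | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9]' | wc -l)
  if [ "$n" -gt 2 ]; then echo "PCIe card"; else echo "internal"; fi
}

classify "/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0"  # prints: PCIe card
classify "/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2"                            # prints: internal
```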

Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

I've never seen synonvme correctly report that an NVMe drive was in a PCIe card. But it did alert me to the fact I had the wrong permissions set on /usr/syno/bin/synonvme.


007revad avatar 007revad commented on September 22, 2024

@RozzNL I just noticed your screenshot shows 3 available pools. You should be able to click on ... and select "Online Assemble".


RozzNL avatar RozzNL commented on September 22, 2024

@RozzNL I just noticed your screenshot shows 3 available pools. You should be able to click on ... and select "Online Assemble".

I deleted all 3 available pools because that was me testing and changing everything before I reached out to you.
I've already created a read cache on the internals, and will use the m2_volume script for the PCIe card.
As for the temps, my cover is also off but the Syno is placed in a relatively warm spot, which doesn't help. The temps are well within operating range though, so I'm not worried.

Do you already know if this will survive a dsm update?


007revad avatar 007revad commented on September 22, 2024

Do you already know if this will survive a dsm update?

I assume you mean the M2 volume? After a DSM update you'll need to run m2_card_fix then maybe do an online assemble.

Once I update syno_hdd_db you won't need m2_card_fix.


MirHekmat avatar MirHekmat commented on September 22, 2024

Hi @007revad, I have a DS1821+ as well and I am having the same issue. For me it's just showing the E10M20-T1 in the Info Centre; it doesn't show LAN 5 at all, nor the drives.

I am good with computer hardware installation etc. but bad with coding. I see you have managed to help @RozzNL and fix the issue. Could you kindly summarise the correct and necessary steps to get this up and running? I have contacted Synology and they are saying to return the card as it's not on the Synology compatibility list. I would hate to return it, as the two extra cache/storage drives would be extremely helpful for my 4K video editing.
image
image


007revad avatar 007revad commented on September 22, 2024

@MirHekmat

Which DSM version is your DS1821+ using?

I assume you've already run syno_hdd_db.sh since installing the E10M20-T1.

  1. Go to m2_card_fix.sh
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with sudo -i
  4. Reboot.

download_raw


MirHekmat avatar MirHekmat commented on September 22, 2024

Hey Dave,

The DSM is: DSM 7.2-64570 Update 1

I actually haven't run syno_hdd_db.sh

I read this post from top to bottom and saw there were a few things that were done; some worked and some didn't, as the other person mentions. So I just wanted to start from where it actually mattered (maybe it all matters, I am not sure).

So would you like me to start from here, steps 1 to 4, and it should work?
  1. Go to m2_card_fix.sh
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with sudo -i
  4. Reboot.

Or first run syno_hdd_db.sh (where is this located? Kindly advise) and then steps 1 to 4?
Sorry, I am a noob. Thank you so much for all your help!


007revad avatar 007revad commented on September 22, 2024

The DSM is: DSM 7.2-64570 Update 1

I actually haven't run syno_hdd_db.sh

For a DS1821+ with DSM 7.2-64570 Update 1 you only need Synology_HDD_db and the E10M20-T1 will work.

If you update to DSM 7.2-64570 Update 2 or Update 3 you'd also need the following steps.

  1. Go to m2_card_fix.sh
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with sudo -i
  4. Reboot.

I will integrate m2_card_fix.sh into Synology_HDD_db soon so it will do it all.

I've also got to test DSM 7.2.1-69057 Update 1


MirHekmat avatar MirHekmat commented on September 22, 2024

Thank you for the clear instructions.
Also, as I had one delivery for the whole lot, I received 2x 16TB IronWolf drives at the same time as the E10M20-T1, so I installed everything.
The NAS is in the process of adding the 2x 16TB drives to my SHR raid (current progress: 41.98%).

Do you suggest I wait for the NAS to finish this rebuild to 100%? The ETA is 2 days remaining. Or is it safe to run the code and reboot? Do you reckon it'll start back from where it left off, or might I lose some progress?
image


MirHekmat avatar MirHekmat commented on September 22, 2024

Hey mate, it worked great, thank you for your hard work. I chipped in a bit through PayPal.

Do I need to maintain this through a task scheduler now, as you have mentioned?

Also, is this the correct guide for creating M.2 storage volumes? https://github.com/007revad/Synology_M2_volume


MirHekmat avatar MirHekmat commented on September 22, 2024

There is also this one: https://github.com/007revad/Synology_enable_M2_volume

Not sure of the difference; which one would work best? I have 4x Samsung M.2 drives: 2 installed internally and 2 installed in the E10M20-T1. I would like to use the 2 in the E10M20-T1 as storage if possible.


MirHekmat avatar MirHekmat commented on September 22, 2024

@007revad Hey mate,

I think you are in Australia too. I have a 2nd DS1821+ coming. Do you have any cheaper/good third-party 10G card alternatives? I bought the TP-Link TX401 but I couldn't get that to work. Dummy me didn't know back then that Synology is very hardware restricted.


bitcinnamon avatar bitcinnamon commented on September 22, 2024

Hi Dave @007revad

I really appreciate your kind support. I have read this thread from top to bottom and tried all of these scripts, but unfortunately none of them work on my rig.

RS1221+
OS: DSM 7.2.1-69057 Update 1
RAM: Samsung 32GB *2  // compatible
HDD: 8x 12TB SHR-2  // mixed Seagate, WD and Toshiba; they all work fine
Expansion Card: E10M20-T1
M.2 Drives: CFD Gaming 2TB *2  // recognized as: [Unknown CSSD-M2B2TPG3VNF]
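(A drive showing as `[Unknown ...]` generally means its model string is missing from the drive compatibility database files under /var/lib/disk-compatibility, which is exactly what Synology_HDD_db edits. A self-contained sketch of that lookup, using a made-up db file; the real schema differs and varies by DSM version:)

```shell
# Toy compatibility db (invented content, not the real Synology schema).
cat > /tmp/toy_host.db <<'EOF'
{"disk_compatbility_info":{"Samsung SSD 970 EVO Plus 2TB":{"*":{}}}}
EOF

# Is a given model string present in the db file?
model_known() {  # usage: model_known <model> <dbfile>
  grep -qF "\"$1\"" "$2" && echo known || echo unknown
}

model_known "Samsung SSD 970 EVO Plus 2TB" /tmp/toy_host.db   # prints: known
model_known "CSSD-M2B2TPG3VNF" /tmp/toy_host.db               # prints: unknown
```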

In the initial state, DSM could set up my M.2 drives as cache without any warnings or errors.

  1. Ran the [Synology_HDD_db] script; the M.2 drives disappeared, but the 10GbE NIC still worked.
  2. Then I tried [m2_card_fix.sh] mentioned above; nothing happened.

After this I did a full reset of the rig (7.2.1u1) and the M.2 drives appeared back in the list.

  1. Ran [Synology_M2_volume] to create pools and rebooted; they were gone again.
  2. Ran [Synology_HDD_db]; nothing happened.
  3. Ran [m2_card_fix.sh]; still nothing.

Tried another full reset (7.2.1u1); I could see the M.2 drives marked "unsupported" in the list.
When I clicked on 'Reset Drive', it turned green/OK and I was able to make a cache.

  1. Ran [Synology_HDD_db]; they disappeared again.
  2. Ran [m2_card_fix.sh]; still nothing.

So I downgraded to DSM 7.2-64570 Update 1 and prevented updating to 7.2u3.

  1. Ran [Synology_HDD_db]; the M.2 drives disappeared again.

Factory reset (7.2u1) again and used [Synology_M2_volume] to create pools and rebooted; they also disappeared.

  1. Ran [Synology_HDD_db]; nothing happened.
  2. Ran [m2_card_fix.sh]; nothing happened.

Another factory reset; this time I used mdadm over SSH to create the RAIDs manually, and after reboot they disappeared again.

  1. Ran [Synology_HDD_db]; nothing happened.
  2. Ran [m2_card_fix.sh]; nothing happened.

I've attached my RS1221+'s synonvme and libsynonvme.so.1; hope it helps.
rs1221.zip

