Locked core clock speed is much better than power limit, so why is it not included by default?

I'm having difficulties setting it this way, so I tried 670 directly in the GUI and it looks fine; it's the best efficiency I've seen so far (at the cost of about 2 MH/s of performance).

520 kH/W

I don't know what is better: to run with 2 more MH/s, or to be more efficient in terms of kH/W.

My previous setting was this one:

Personally, I'd go for the extra 2 MH/s. You are losing 14 MH/s overall.
I've settled on offset -400, lgc 670 for my 3070s. Hashing well so far on 2550 mem.

Also, I've put the nvidia-smi commands in a script and set it as a start script in T-Rex with "script-start": "/home/user/set-gpu-clock.sh".
Now it sets the GPU clocks automatically every time I restart the miner or reboot the rig, and I don't have to bother doing it manually.
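
In case it helps, here is a minimal sketch of what such a start script could look like. The path matches the one in my config above; the clock values and GPU indices are examples only, so adjust them for your own cards.

#!/bin/bash
# /home/user/set-gpu-clock.sh -- example only; clock values are illustrative
nvidia-smi -pm 1              # enable persistence mode so the setting sticks
nvidia-smi -lgc 670,670       # lock min and max core clock on all GPUs to 670 MHz
# Or per card, if they need different clocks:
# nvidia-smi -i 0 -lgc 670,670
# nvidia-smi -i 1 -lgc 1100,1100

Remember to make it executable (chmod +x /home/user/set-gpu-clock.sh); T-Rex then runs it on every start via "script-start".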

Could you give me more detail on how to make your script? That interests me a lot. Thank you!

Similar. It's better to set the -offset and then set the clock from the CLI.

E.g., offset -500, CLI 750: perfect result.
If I enter 750 directly in the web GUI, the result is 1.5-2 MH/s per card lower.

How can that be?

Wow, tried this on my 2060. Core at 1000 dropped the watts from 125 to 68 with the same 30.5 MH/s. Wild!

Effective GPU clock = lgc value - offset value.
In your first case, effective clock = 750 - (-500) = 1250 MHz, which is too much for a 3070 and too low for a 3060 Ti. For a 3070 you should get better results around 1080 MHz effective GPU clock, and for a 3060 Ti around 1425 MHz. But every piece of silicon is different, so YMMV.
When you enter 750 in the GUI your offset = 0, so
effective clock = 750 - 0 = 750 MHz, hence the lower hashrate.
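
If you want to double-check what the driver actually ends up holding, nvidia-smi can report the current graphics clock and power draw per card (assuming a reasonably recent driver):

nvidia-smi --query-gpu=index,clocks.gr,power.draw --format=csv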

It is strange, but setting offset -400 and lgc 680 gives 3-5 W lower power consumption and the same hashrate compared to offset 0 and lgc 1080 (entering 1080 directly in the GUI).
I suspect the software wattmeter may not be accurate in that specific case. I plan to test it with a hardware wattmeter; it could be that there is no difference in real power consumption. I will check that theory when I get my Kill A Watt.

How do your invalids look after doing this? And what are your mem clocks?

Thanks for the in-depth answer.
Yep, it seems there is a difference between setting the GPU clock in one step or in two steps (GUI, then CLI afterwards).
I'll test 1080 MHz in the GUI vs. -500 in the GUI at boot plus 750 from the CLI,
or, e.g., -500 in the GUI and then 1080 MHz in the GUI as well.

Thanks a lot

In my quick try (4x RTX 3070),
PL: 130

  1. GUI 1250: power (with CPU etc.): 515 W, hashrate: 252.2 MH/s
  2. GUI -500 + CLI 750: power (with CPU etc.): 503 W, hashrate: 252.1 MH/s

Time waited for it to stabilize: ~1 h.
It seems to be repeatable.

Any tips for an Asus TUF OC 3080?

Thanks in advance

Good thing you don’t have to use sudo in the Hive console

Hope this "secret" config file will come in handy. You can create a skeleton version by executing

echo 'DEF_FIXCLOCK=-200' > /hive-config/busid.conf

and then forget about the lgc thing for good.

Could you please elaborate on this a little?
Tried -200 and -400, but didn’t notice any effect.
Google knows nothing about this.

It’s great!!

You should apply OC from the Hive dashboard or run nvidia-oc from the console. Then examine the output of nvidia-oc.

Thank you! It works! Very much appreciated.

BTW, it seems I can only set the same value for all cards. Is it possible to set a different value for each card that way? I tried comma- and space-separated lists, but no luck.

UPD: Never mind, figured it out. I had to modify the nvidia-oc script a bit. Now it accepts an array of values in DEF_FIXCLOCK.
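
Roughly, the idea is something like this stand-alone sketch (not the actual nvidia-oc code; the values are examples only, one absolute locked core clock per GPU in index order):

#!/bin/bash
# Example only: lock each GPU to its own absolute core clock
FIXCLOCKS="1100 1080 1100 1070"
i=0
for clk in $FIXCLOCKS; do
    nvidia-smi -i "$i" -lgc "$clk","$clk"
    i=$((i + 1))
done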

I did not understand anything. Could you explain a little more? I put in my values the same way and I see no change…

The file I mentioned before is a configuration file for the nvidia-oc script. You should run nvidia-oc right after updating it.

Could you please share your configs for the 2060, @SnowmanSimon? How much on the CLI and how much in the GUI?

Thanks man

After more than 50 days of trial and error, jumping from Windows to HiveOS,
I was thinking that I had found the right OC for my 9x RTX 3070 (a rough command-line equivalent is sketched after the list):

  • Absolute core clock 1100
  • Memory clock 2600
  • PL 130 W (I have one thirsty MSI 2X Ventus and a PNY XLR8, which are set to 135 W)
  • Delay in applying OC: 60 sec
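
For reference, the core clock and power limit parts of these settings roughly correspond to the following nvidia-smi commands (a sketch only; the memory offset is applied by Hive's own OC mechanism and is not shown, and you can add -i <index> to target the two cards that run at 135 W):

nvidia-smi -pl 130            # power limit 130 W
nvidia-smi -lgc 1100,1100     # absolute core clock locked to 1100 MHz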

I use an ASRock H110 BTC board,
2x Corsair 1200 W Platinum PSUs,
risers fed via 6-pin, etc.,
and the T-Rex miner.

Everything works well with no rejected shares,
but the miner decides to restart after ~6 h (the longest it managed was 17 h).

When I check the logs in the shell, it tells me CUDA ERROR… lower OC… which I did. I left them at 1070, which worked for 14 h with no issues; after that the miner restarted again.

Now I will try the settings in the screenshot below…
What am I doing wrong?