I changed the heat sink to something better.

This is my setup.
I had the same heatsink on the HiKey960 but it wasn't enough,
but it looks like the HiKey970 doesn't generate as much heat, as it uses a better lithography process.

I ran many benchmarks and the CPU stays at max frequency without throttling.

How did you remove the heatsink?

pure strength :slight_smile:

I am very curious about the temps.
On the stock heatsink with a fan,
I have all the cores pinned at max frequency and it touches 80°C at times but never throttles.
{Interesting Screenshot}
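For anyone wanting to reproduce the pinned-at-max-frequency setup, a rough sketch using the standard cpufreq sysfs interface (paths assumed, run as root, adjust for your kernel):
for policy in /sys/devices/system/cpu/cpufreq/policy*; do
    echo performance > "$policy/scaling_governor"   # hold every core at its top frequency
done
# while a benchmark is running, check what the cores actually clock at (values in kHz):
cat /sys/devices/system/cpu/cpufreq/policy*/scaling_cur_freq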

Compiling the kernel didn't really heat up the SoC,
as it doesn't use NEON.

So try something that uses NEON; a sketch of one such load is below.
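For example, a minimal NEON burn loop could look like this (assumes gcc, or clang, and arm_neon.h are available on the board; the file name neon_burn.c is just made up):
cat > neon_burn.c << 'EOF'
#include <arm_neon.h>
/* dependent fused multiply-accumulate chain on 4 floats, keeps the NEON unit busy */
int main(void) {
    float32x4_t a = vdupq_n_f32(1.0001f);
    float32x4_t b = vdupq_n_f32(0.9999f);
    for (long i = 0; i < 2000000000L; i++)   /* raise the count for a longer burn */
        a = vmlaq_f32(a, a, b);
    volatile float sink = vgetq_lane_f32(a, 0);   /* keep the result live */
    (void)sink;
    return 0;
}
EOF
gcc -O2 -o neon_burn neon_burn.c
for i in $(seq $(nproc)); do ./neon_burn & done; wait   # one instance per core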
Also look at this (changing the DDR frequency):
echo userspace > /sys/class/devfreq/ddr_devfreq/governor
echo 1866000000 > /sys/class/devfreq/ddr_devfreq/min_freq
echo 1866000000 > /sys/class/devfreq/ddr_devfreq/max_freq
It looks like it heats up much more when you raise the DDR frequency to the maximum.
There is also a list of DDR frequencies in /sys/class/devfreq/ddr_devfreq/ava…freq
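For reference, a sketch of how to check what is available and put things back afterwards (available_frequencies is the standard devfreq attribute name, which I assume is the truncated path above):
cat /sys/class/devfreq/ddr_devfreq/governor                 # note the original governor first
cat /sys/class/devfreq/ddr_devfreq/available_frequencies    # the list of DDR frequencies
# when finished, restore the governor you noted above, e.g.:
echo <original_governor> > /sys/class/devfreq/ddr_devfreq/governor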

Try this:
echo 99999999 > /sys/class/thermal/thermal_zone0/sustainable_power

#echo NO_ENERGY_AWARE > /sys/kernel/debug/sched_features
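To see what these tweaks actually do, a simple way to watch the SoC temperature and CPU frequencies during a run (assuming thermal_zone0 is the SoC sensor, as in the commands above):
# temperatures are reported in millidegrees C, frequencies in kHz
watch -n1 'cat /sys/class/thermal/thermal_zone*/temp /sys/devices/system/cpu/cpufreq/policy*/scaling_cur_freq'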

Aah… almost forgot about the DDR… now it's nice and toasty, touching 90°C.

I am doing some floating-point operations and I do get throttling with the default heat sink, as you can see in the image below. The graph shows power usage in milliwatts during the computation part.

The stock hikey970 heatsink is only held in place by the thermal paste.
The paste is really thick on one side because the 2 chips it covers are different heights. If the heatsink was stepped to match the chip heights it would work a lot better.

I would estimate that the thermal paste is about 3/32 of an inch (roughly 2mm) thick on one chip. Thermal paste that thick isn't good and may be part of the temperature problems.

I love reading these throwaway phrases on forums that explain nothing and show nothing. Empty words. I would like to buy several dozen boards, but before that I want to be sure I can remove the heatsink. I tried different techniques and in the end I tore out the memory. Now my board is dead, and since prices on the internet range from €100 to €350, it stings a bit.

Sorry to hear you broke your hardware.

Well-cooked thermal paste does end up as a fairly strong cement and can be pretty tough to move. Pulling rather than rotating the heatsink is a mistake I've made to my cost in the past (and of course even rotating can do damage if things are really stuck).

Simply pointing a fan at the heat sink brought me pretty good results.
I still have to crunch the numbers, but I would say around a 20% improvement in the NAS Parallel Benchmarks [1].

[1] https://www.nas.nasa.gov/publications/npb.html
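For anyone wanting to reproduce this, roughly how the OpenMP flavour of NPB gets built and run; the directory, benchmark and class names below are assumptions from memory, and the link above has the authoritative instructions:
cd NPB3.4-OMP
cp config/make.def.template config/make.def    # set CC/FC for your toolchain here
make bt CLASS=B                                # build the BT benchmark, problem class B
OMP_NUM_THREADS=$(nproc) ./bin/bt.B.x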


I suspect that you made more gain by cooling the BOARD than by the airflow across the cooler. Remember that it's a PoP configuration, with the RAM ON TOP. The RAM is in a fairly non-conductive plastic package, and the heat sink is sticky-taped onto the top of that.

But down into the board, the solder balls conduct heat into several layers of copper that spread it across the entire board.

I see you have a rock960 sitting next to it. Pretty amazing what a difference an aluminum-cased CPU conducting heat directly into a massive heat sink makes, isn't it? Even though that CPU makes 2-3 times as much heat, it stays cooler, doesn't it?

What you see in the picture is the first and only “cooling” solution that I tried. During previous tests I noticed that the bottom of the board gets quite hot, so I decided to raise it when I placed the fan; this way the airflow goes over the heat sink and the underside too.

Where did you get this info?

I don't have temperature data on hand to confirm or deny, but most probably it's a yes; just by touch you can feel the difference.

What do you mean? That you don’t believe me?
This link has a labelled picture with the heatsink off:
https://club.huawei.com/thread-19017489-1-1.html

Otherwise, get a light and a magnifying glass – you can see the void between the two chips.

I find this info interesting and wanted to know where you got it.

Cheers!

Idle at 40°C, but it goes up to 80+°C under load (and throttles to 2.06GHz in some situations). That is in performance mode. I couldn't get the heat sink off, but the dual fans still take quite a bit off the passive cooler. The “board” fan runs at 3V; all three are fairly silent.
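If you want to see where that throttling point sits, the thermal trip thresholds can be read from sysfs (standard thermal zone attributes; the zone numbering differs between kernels):
for tz in /sys/class/thermal/thermal_zone*; do
    echo "== $tz ($(cat $tz/type)) =="
    cat $tz/temp                                 # current temperature, in millidegrees C
    grep . $tz/trip_point_*_temp 2>/dev/null     # throttling / trip thresholds
done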


Interesting setup!
What are you doing here?

You will find much improved performance by removing that and instead mounting one fan to blow ACROSS the board. The CPU/RAM is connected in a PoP configuration, so for heat to dissipate through the heatsink means that it has to be conducted through the RAM, which is an insulating plastic package. You will find that a lot of heat is dissipated into the copper of the board itself, so cooling the board is more effective than cooling the RAM.

I wish I could. I can't get the original passive cooler off the CPU/RAM. I tried, but risked breaking it. With the passive heat sink it throttles constantly to 1.4GHz or even below. The secondary fan actually blows underneath the board - not much of a difference really, but I already had the fan and it does help somewhat. At 3.3V it is barely noticeable. The fans on the CPU are actually RPi fans, which fit perfectly. I saw that on another thread and had to try it out. Small and quiet (https://www.amazon.com/gp/product/B07D5WWNH6/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1)

The main feature of this board is that it doesn't have only 1 but 2 PCIe expansion buses. The NVMe is just that: attempts to use an NVMe-to-x4-PCIe adapter failed. It does read the NVMe SSD, though. The internal GPU is quite quick, and I finally got HDMI audio working. But no support for a second screen, being locked to 1080p, and all the timing issues give me grief. I have a kernel which should go to 4096x2048; it probably needs a custom EDID file. No success yet.
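For the custom EDID route, one hedged way to try it on a 4.9-era kernel is to drop the EDID binary under /lib/firmware/edid/ and point the DRM helper at it from the kernel command line (the connector name HDMI-A-1 and the file name are assumptions for this setup):
mkdir -p /lib/firmware/edid
cp my-custom-edid.bin /lib/firmware/edid/my-custom-edid.bin
# then append to the kernel command line and reboot:
#   drm_kms_helper.edid_firmware=HDMI-A-1:edid/my-custom-edid.bin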

My main motivation was actually to get an external GPU working with this board. I was testing various cards on the mini-PCIe bus and to my surprise all of them work. The GPU kills all other PCIe devices, though. From what I've read, it's probably the memory map a GPU needs on the PCIe bus which the mini-PCIe slot can't handle. I have a 4x USB 3.0 card, which actually is useful - the internal connectors are useless because only 1 device across all 3 connectors actually works, and if you plug in a mouse, no thumb drive or HDD will work (they must all be one speed). USB-C is the same - and not USB-C fast.

Of course, you are looking at the limited bandwidth of the HiKey's PCIe implementation. Its internal switch connects 3 devices: M.2, mini-PCIe and Ethernet. And they all share the same 5Gb/s of bandwidth - no matter how many devices you attach.
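If you want to confirm what each device behind that switch actually negotiated, the link capability and status can be read with lspci (from pciutils, run as root):
sudo lspci -vv | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'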

The power block is just a simple 24-pin ATX power header and drives the board and the PCIe buses from 12V at 10A max (120W), so that the board is not power-starved if you, for example, plug in a GPU (which can draw up to 75W - I only got myself a GT710, which should be roughly 25W tops) plus whatever else you plug into the PCIe slots.

I have a fairly stable 4.9 kernel now (thanks to janrinze's branch) and run Ubuntu 18.04 LTS with Xfce4 on it. The PCIe has some serious troubles. While I can get NVMe and USB SSDs to work, they all crap out on sustained high-bandwidth transfers - it has nothing to do with my setup, though. This also happens with an SSD directly connected to the M.2 slot. So I am looking into this now (I also have a 5.1 kernel with PCIe fixes; I might try to port that over). A USB 2.0 webcam never crashes - which tells me low-speed transfers (max 480Mb/s) work, but when the bus reaches its bandwidth limits it resets and finally gives up (timing troubles?).
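A simple way to reproduce the sustained-transfer failures is a long sequential read while watching the kernel log (the device name /dev/nvme0n1 is just an example; double-check yours before running anything):
dd if=/dev/nvme0n1 of=/dev/null bs=4M status=progress    # read-only, but verify the device name
# in a second terminal, watch for PCIe / NVMe errors and resets:
dmesg -w | grep -iE 'nvme|pcie|reset'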

It's a pity this board is so badly supported. Maybe it has hardware bugs. For $300 it had better work. It beats any other ARM SBC on the market - by at least a factor of 2. 6GB of RAM is great, which no other board has. One could build a desktop Linux computer out of it - and in fact it is my main computer now. Why? Because I can. And it's too precious to make it another ARM node in my cluster… a fun toy to play with.