We can read CNTFRQ_EL0 to see the frequency of the generic timer. On the Kirin 960, I tested that the frequency is 1.92 MHz. May I ask whether this value can be changed by the programmer or not?
Any ideas would be really valuable! Thanks!
This kind of feature is SoC dependent and I doubt it would be possible to change that other than in EL3… So AFAIK this is not supported. But can I ask you why you want to change the frequency?
Thanks for the response, and sorry for my late reply.
I agree with you that this feature depends on the SoC, so may I ask how and where I can find some documentation for the Kirin 960 SoC? I guess those materials are hard to find~
Well, that is because I’m curious about how to measure elapsed time on the ARM platform. My naive understanding is that a hardware timer is more reliable than a software implementation. Thus, if we want to measure the elapsed time of an event, we can read cntp_cval to calculate it; if we want a precise timing notification, we can set the next_event of the local timer to raise an interrupt. So the higher the frequency of the local timer, the higher the time resolution I can get, and that’s why I want to increase it.
If I want to do something exactly 100 us later, we can program 192 counts into cntp and wait for an interrupt, or we can use udelay(100). I trust the first method much more, and I’ve heard that the delay function always results in some bias.
Well, what do you think about that, am I right? Is there any other software method for timing that is reliable?
Thanks a lot in advance for any response!
All methods will have a bias, the question is how you quantify the jitter and what level of precision is required.
Personally I would expect the jitter caused by waiting for an interrupt to fire (e.g. waiting for other interrupt handlers to stop running and then running a bunch of code to figure out which interrupt just went off) to introduce much greater bias than udelay().
LOL! I get your point! Maybe I need to think about it more, thanks!