How to improve clock tick precision?

All kernel questions! 2.4 or new 2.6 kernel, compiling, modules, panics, etc.

How to improve clock tick precision?

Postby marsCat » 31 Jan 2008, 08:56

Right now the minimum timer interrupt period is 10ms, but I want a 1ms interrupt. How can I do that?

Can I just edit the macro definition in /include/asm/param.h and then recompile the kernel? That is, change

#define HZ 100

to

#define HZ 1000
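
Something like this is what I have in mind (just a sketch; if I remember right, in the 2.4 tree include/asm is a symlink that resolves to the architecture directory, so include/asm-ppc/param.h on my G4, but check your own tree):

    /* include/asm/param.h -- 2.4 tree; on PowerPC this should resolve
     * to include/asm-ppc/param.h (my assumption; verify locally) */
    #define HZ 1000   /* was 100; 1000 ticks/s = one timer interrupt per ms */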

I am not sure whether this approach is OK, or whether the hardware can support it. Below is the basic information about my environment:

Yellow Dog Linux, kernel 2.4.22-2

***************************************************
bash-2.05b$ cat /proc/cpuinfo
cpu             : 7455, altivec supported
clock           : 1249MHz
revision        : 3.3 (pvr 8001 0303)
bogomips        : 1248.46
machine         : PowerMac3,6
motherboard     : PowerMac3,6 MacRISC2 MacRISC Power Macintosh
board revision  : 00000001
detected as     : 129 (PowerMac G4 Windtunnel)
pmac flags      : 00000000
L2 cache        : 256K unified
memory          : 2048MB
pmac-generation : NewWorld
bash-2.05b$


Can anybody give me some suggestions? Many thanks!
marsCat
ydl newbie
Posts: 2
Joined: 22 Jan 2008, 15:12

Postby NeoAmsterdam » 05 Feb 2008, 11:29

By shrinking the timeslice quantum you reduce task efficiency, since you've just increased the number of context switches tenfold.

Unless you have a specific reason to do so (and can explain in no uncertain terms why), leave well enough alone.

(And no, changing a #define will not do it.)
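
If you want to see what the current tick actually gives you, here's a quick userspace sketch (my own illustration, assuming a POSIX environment) that measures how long a requested 1ms sleep really takes. On a HZ=100 kernel it should come back in the 10-20ms range, because sleeps get rounded up to tick boundaries:

    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval before, after;
        struct timespec req = { 0, 1000000 };  /* request a 1ms sleep */
        long elapsed_us;

        gettimeofday(&before, NULL);
        nanosleep(&req, NULL);                 /* rounded up to the next tick */
        gettimeofday(&after, NULL);

        elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
                   + (after.tv_usec - before.tv_usec);
        printf("asked for 1000us, actually slept %ldus\n", elapsed_us);
        return 0;
    }

Compile it with gcc and run it a few times; if the numbers hover around 10000-20000us, you're seeing the 10ms quantum in action.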
NeoAmsterdam
ydl lover
Posts: 66
Joined: 19 Dec 2004, 12:52
Location: NYC

Postby marsCat » 21 Feb 2008, 04:29

Because we want to simulate a slot interrupt, which needs a 1ms timer.
marsCat
ydl newbie
Posts: 2
Joined: 22 Jan 2008, 15:12

Hmm... tricky.

Postby NeoAmsterdam » 26 Feb 2008, 22:06

If memory serves, the standard timeslice quantum is 10ms, which isn't fast enough for your purposes. On the other hand, I know the kernel's tick rate (the HZ constant) can be changed through a #define, but for it to take effect you'd have to recompile a non-standard kernel (which is why I said a quick #define change wouldn't do the trick).

If it's feasible, you might want to consider emulating the clock by running a constrained virtual machine, or you might be able to fake a clock with an infinite simClock = simClock + simMillisecond loop (see the sketch below). Even so, you'd still be interfacing with the hardware, which means you'd either have to underclock the system back to the 1990s or learn an electronics workbench software suite like qcad or KCircuit.
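
To make that second idea concrete, a bare-bones software clock might look something like this (purely illustrative; simClock and simMillisecond are just the names from the idea above, and handle_slot_interrupt is a hypothetical hook you'd supply yourself):

    #include <stdio.h>

    /* Hypothetical handler for the simulated slot interrupt. */
    static void handle_slot_interrupt(unsigned long ms)
    {
        if (ms % 1000 == 0)
            printf("sim time: %lus\n", ms / 1000);
    }

    int main(void)
    {
        unsigned long simClock = 0;              /* simulated time, in ms */
        const unsigned long simMillisecond = 1;

        for (;;) {                               /* spins forever, by design */
            simClock = simClock + simMillisecond;
            handle_slot_interrupt(simClock);
        }
        return 0;
    }

Note that this spins flat out, so a simulated millisecond bears no fixed relation to a real one; you'd have to calibrate the loop against real time (or against the hardware you're simulating) yourself.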

You might want to throw this question at the YDL mailing lists - I'm sure someone will be able to offer better options than I can think of.

Best of luck.
NeoAmsterdam
ydl lover
Posts: 66
Joined: 19 Dec 2004, 12:52
Location: NYC

