Most precise timekeeping method for periodic event

I am trying to develop an application that must perform a periodic task every “x” nanoseconds, where “x” is usually around 50000 (50 µs). I am currently using an approach similar to realtime:documentation:howto:applications:cyclic [Wiki], where the author uses clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, ...). It works well on short time scales, but after a few hours the accumulated error causes me trouble.
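For reference, my loop follows the pattern from that howto, roughly like this (a simplified sketch with error handling trimmed; do_periodic_work() is just a placeholder for my task):

```c
#include <errno.h>
#include <time.h>

#define PERIOD_NS     50000        /* nominal period, ~50 µs */
#define NSEC_PER_SEC  1000000000L

/* Advance an absolute deadline by one period, normalising tv_nsec. */
static void inc_period(struct timespec *t)
{
    t->tv_nsec += PERIOD_NS;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

int main(void)
{
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        inc_period(&next);
        /* sleep until the absolute deadline; retry if a signal interrupts the sleep */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
            ;
        /* do_periodic_work();  <- placeholder for the actual task */
    }
}
```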

I was wondering if there is a more precise way to time my events than CLOCK_MONOTONIC, using the hardware present on the Apalis iMX6 module. It might be OK for the granularity of this source to be coarser if its accuracy is greater than the one provided by clock_nanosleep.

Hello @denersc and Welcome to the Toradex Community!

I am trying to develop an application that must perform a periodic task every “x” nanoseconds, where “x” is usually around 50000 (50 µs).

So you could also say every X µs, where X is usually 50, or not? What is your application?

It works well on short time scales, but after a few hours the accumulated error causes me trouble.

What is the error?

I was wondering if there is a more precise way to time my events than CLOCK_MONOTONIC, using the hardware present on the Apalis iMX6 module. It might be OK for the granularity of this source to be coarser if its accuracy is greater than the one provided by clock_nanosleep.

What are your requirements? How much jitter can you accept on the timing of your events? Which granularity and accuracy are needed?

Best regards,
Jaski

Hello @jaski.tx ! Thank you very much.

So you could also say every X µs, where X is usually 50, or not? What is your application?

Yes; to be precise, X is always 50203.(fractional) ns. The application is such that, every period, a fixed amount of data (204 bytes, precisely) is written into an internal queue of the application. The fractional part is saved, and when it adds up to a whole nanosecond, the time for the next period is incremented by one.
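In code, the bookkeeping is roughly along these lines (only a sketch; the exact period here is derived from the figures above rather than copied from our actual source):

```c
#include <time.h>

#define NSEC_PER_SEC    1000000000L
#define TARGET_BPS      32507936.0   /* desired throughput */
#define BYTES_PER_TICK  204.0        /* bytes queued per period */
/* exact period in ns: 204 * 8 bits / 32507936 bps ≈ 50203.1 ns */
#define PERIOD_NS_EXACT (BYTES_PER_TICK * 8.0 * 1e9 / TARGET_BPS)

/* Advance the absolute deadline by one period, carrying the sub-nanosecond
 * remainder so the long-run average period stays at PERIOD_NS_EXACT. */
static void advance_deadline(struct timespec *next, double *frac_acc)
{
    long whole  = (long)PERIOD_NS_EXACT;
    double frac = PERIOD_NS_EXACT - (double)whole;

    *frac_acc += frac;
    if (*frac_acc >= 1.0) {          /* fractional ns summed up to a whole ns */
        *frac_acc -= 1.0;
        whole += 1;
    }

    next->tv_nsec += whole;
    if (next->tv_nsec >= NSEC_PER_SEC) {
        next->tv_nsec -= NSEC_PER_SEC;
        next->tv_sec++;
    }
}
```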

What is the error?

An actual measurement, for instance, was that after about 3 hours we had written 100 kBytes more than we should have. I interpret this fact as “our application is 25 ms ahead in time after 3 hours”, considering the desired throughput, which is 32507936 bps.

I was unable to tell whether this advance in time was due to, say, clock drift, the granularity of the time the kernel can actually work with, or something else.
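Just as a back-of-the-envelope check on those numbers (taking the “about 3 hours” at face value):

```c
#include <stdio.h>

int main(void)
{
    const double target_bps  = 32507936.0;   /* desired throughput */
    const double extra_bytes = 100e3;        /* measured surplus after ~3 h */
    const double elapsed_s   = 3.0 * 3600.0; /* assumed: "about 3 hours" */

    double ahead_s   = extra_bytes * 8.0 / target_bps;  /* ≈ 0.025 s */
    double drift_ppm = ahead_s / elapsed_s * 1e6;       /* ≈ 2.3 ppm */

    printf("ahead by %.1f ms, i.e. a rate error of roughly %.1f ppm\n",
           ahead_s * 1e3, drift_ppm);
    return 0;
}
```

So the surplus corresponds to a rate error on the order of a few parts per million of the nominal rate.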

What are your requirements? How much jitter can you accept on the timing of your events? Which granularity and accuracy are needed?

The actual requirement is that the average bits/sec throughput in this queue be precisely 32507936 bps. Jitter or spikes are to some extent acceptable.

When I asked the question, what I had in mind was “could I use some other time source to regulate my primary source?”, i.e., a time source with maybe coarser granularity (say, ms) but that could be used to sense that my application was ahead in time.
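Something along these lines is what I am imagining; it is only a sketch, and it assumes the reference clock (here CLOCK_REALTIME, on the assumption that it is disciplined by NTP on the target) is more accurate in the long run than CLOCK_MONOTONIC:

```c
#include <stdint.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L
#define TARGET_BPS   32507936.0

static double elapsed_s(const struct timespec *start, const struct timespec *now)
{
    return (double)(now->tv_sec - start->tv_sec)
         + (double)(now->tv_nsec - start->tv_nsec) / NSEC_PER_SEC;
}

/* Every N periods, compare how much has been queued against how much the
 * reference clock says should have been queued, and return how many extra
 * nanoseconds to add to the next deadline (0 if on time or behind).
 * 'start' is a CLOCK_REALTIME timestamp taken when queuing began;
 * 'bytes_queued' is the total queued since then. */
static long correction_ns(const struct timespec *start, uint64_t bytes_queued)
{
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);   /* assumed NTP-disciplined reference */

    double should_have = TARGET_BPS * elapsed_s(start, &now) / 8.0;
    double ahead_bytes = (double)bytes_queued - should_have;

    if (ahead_bytes <= 0.0)
        return 0;
    /* time needed to absorb the surplus at the target rate */
    return (long)(ahead_bytes * 8.0 / TARGET_BPS * 1e9);
}
```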

I understand that I might be working at the limit of precision, but nevertheless I decided to ask.

Thank you in advance,

Dener

I guess your first try at a rate-limiting algorithm is just plain too primitive. Maybe you should have a look at the proper Linux TC (traffic control) stuff: