Memcpy vs. Assignment

Introduction

Several people have recently asked us questions similar to this one: “Can I use memcpy to copy an object of type string?”

Our first impulse is to say that if you have to ask, you shouldn’t be doing it — because you will get in trouble if you try. Nevertheless, the concepts behind the question are interesting enough to merit a closer look.

Briefly, the answer is that you can use memcpy safely to copy an object only if the object’s type is what is called a POD type, which stands for “Plain Old Data.” Because string is not a POD type, there is no guarantee that it is safe to use memcpy on a string.

What memcpy Does

As its name suggests, the memcpy function, from the C Standard library, copies memory:

void* memcpy(void* dest, const void* source, size_t n);

The source and dest arguments each refer to the initial byte of an n-byte region of memory; the two regions must not overlap. The memcpy function copies the memory in the source region to the memory in the dest region, obliterating whatever contents the dest region might have had previously. The memcpy function returns a copy of the dest pointer.

For example, suppose we write:

int x = 42;
int y;
memcpy(&y, &x, sizeof(int));

As it happens, int is a POD type, so it is safe to use memcpy on int objects. Accordingly, after executing these statements, y will have a value of 42, just as if we had executed:

y = x;

instead of calling memcpy.

The question, then, is what will happen if we write:

string s = "Hello, world!"; string t; memcpy(&t, &s, sizeof(string));

Will t have Hello, world! as its value or will the value be different? Will such a program even work at all?

The answer is that the program is not guaranteed to work, because string is not a POD type. Indeed, it is likely that this program fragment will cause a crash, as we shall see. The rest of this article explains what a POD type is and gives an idea of why memory-manipulation functions such as memcpy are generally safe only when applied to objects of POD type.

Fundamental Types

We can think of the memory in any computer that supports C++ as being composed of a collection of bytes, each of which contains an implementation-defined number of bits. All bytes contain the same number of bits. In a C++ implementation, that number must be at least eight; if the computer hardware does not support 8-bit or larger bytes, the C++ implementation must fake it in software. Most computers that support C++ have bytes that are exactly 8 bits long, but we have seen computers with bytes as long as 64 bits.

There are three important facts to know about bytes:

  1. A byte is the smallest addressable unit of memory. That is, every region of memory that it is possible to use pointers to define comprises an integral number of bytes. Accordingly, it is possible to use a byte address (which C++ uses the void* type to express) and an integer (which represents an object’s size) to refer to the memory that any object occupies.
  2. Every bit in a region of memory is part of exactly one byte. In particular, there is no information that might somehow fall into the cracks between the bytes [1].
  3. The sizeof operator, when given an object or a type as its argument, returns the number of bytes in an object of that type. All objects of a given type are the same size, so only the type matters.

These three properties imply that if x is an object, we can use ((void*)&x) and sizeof(x) together to represent the memory that x occupies. The question, then, is whether there is any more to x than the contents of its memory. It is that question that the POD notion exists to address: if a type is a POD type, the implication is that there is nothing more to an object of that type than the contents of its memory.
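For concreteness, here is a small fragment (nothing more than a restatement of the facts above) showing how those two pieces of information are obtained for an object x:

#include <cstddef>

int main()
{
    int x = 42;
    void*       first_byte = &x;        // address of the first byte of x
    std::size_t n_bytes    = sizeof(x); // number of bytes x occupies
    // Together, first_byte and n_bytes describe exactly the memory
    // that x occupies: no more and no less.
}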

The fundamental types — that is, the arithmetic, enumeration, and pointer types — are POD types. In other words, the value of an object of a fundamental type depends entirely on the contents of the region of memory that corresponds to that object. It follows that using memcpy to copy an object of a fundamental type will copy that object’s value.

To see what’s happening more clearly, let’s look again at our earlier example:

int x = 42;
int y;
memcpy(&y, &x, sizeof(int));

Here, x and y are of fundamental type (int). They are therefore of POD type, so the bytes that constitute them completely determine their values. Moreover, every object of type int comprises sizeof(int) bytes.

When we call memcpy, it copies a number of bytes given by its third argument— in this case, sizeof(int) — from the region of memory occupied by x to the region occupied by y. Accordingly, the call to memcpy has the same effect as executing:

y = x;

because there is nothing more to x or y than the contents of the corresponding memory.

Structures

Let’s expand our universe by using memcpy to copy a structure. For example:

struct Point { int x, y; };
Point p, q;
p.x = 123;
p.y = 456;
memcpy(&q, &p, sizeof(Point));

Will the call to memcpy still have the same effect as executing:

q = p;

or does the fact that Point is a user-defined type make memcpy not work?

The answer is that this structure is a POD type, because it is still so simple that its memory entirely determines its value, and therefore memcpy is safe to use in this context.

Structure Assignment

When we defined our Point structure, we did not give it an assignment operator. When we try to assign objects of such a type, the compiler treats such an assignment as being equivalent to assigning the objects’ data members. In other words, executing:

q = p;

has the same effect as executing:

q.x = p.x; q.y = p.y;

Because the x and y members of Point are of fundamental type, we can use memcpy to copy those members. Accordingly, it is also safe to use memcpy to copy the entire object.

Suppose, now, that we were to redefine Point to include an assignment operator:

struct Point { int x, y; Point& operator=(const Point&); };

We have deliberately omitted the definition of this assignment operator so that you won’t be tempted to think that you know what it does. It should now be clear that defining the assignment operator for Point has removed the guarantee that memcpy is safe to use on Point objects, because without seeing the definition of the assignment operator, we have no way of knowing that it has the same effect as calling memcpy.

Even if we define our assignment operator to have the same effect as the compiler-generated one:

Point& Point::operator=(const Point& p) { x = p.x; y = p.y; return *this; }

we should no longer consider it safe to use memcpy to copy a Point object, because doing so would rely on knowledge of the inner workings of the Point type, and those workings might change in the future.

In other words, we should be able to trust memcpy only when we are confident that using memcpy to copy an object will have the same effect as the assignment operator for that object. If the object, or any of its (non-static) data members, has a user-defined assignment operator, the compiler would have to read the definition of that operator to figure out whether it has the same effect as the compiler-generated assignment operator; such figuring in general is provably beyond the reach of any program. The moment any member acquires a user-defined assignment operator, this confidence vanishes. Therefore, a type that has a user-defined assignment operator in any of its data members is not a POD.

A More Precise Definition

We have seen two aspects of POD types: the fundamental types are POD types, and structures with user-defined assignment operators are not POD types. Here are the rest of the details:

  • Arithmetic types, enumeration types, and pointers (including pointers to functions and pointers to members) are POD types.
  • An array is a POD type if its elements are.
  • A structure or union is a POD if all of the following are true:
    • Every one of its non-static data members is a POD.
    • It has no user-declared constructors, assignment operators, or destructor.
    • It has no private or protected non-static data members.
    • It has no base classes.
    • It has no virtual functions.

The idea is that a class is a POD type if it has nothing to hide about its representation. Therefore, we can be sure that the value of an object of such a type is nothing more or less than the values of its components, so that copying the object is equivalent to copying its memory.
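If you have a compiler that supports the 2011 C++ standard (which postdates this article), you can even ask the compiler these questions directly through the std::is_pod trait. Note that the 2011 rules differ from the list above in a few small details, but they agree on the examples below. A minimal sketch:

#include <string>
#include <type_traits>

struct Point { int x, y; };        // no constructors, bases, virtuals, etc.

struct Fancy {                     // user-declared assignment operator
    Fancy& operator=(const Fancy&);
    int x, y;
};

static_assert(std::is_pod<Point>::value, "expected Point to be a POD");
static_assert(!std::is_pod<Fancy>::value, "expected Fancy not to be a POD");
static_assert(!std::is_pod<std::string>::value, "expected string not to be a POD");

int main() { }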

Discussion

Let us return to our original question: Is it safe to use memcpy to copy a string? We know that the string class has constructors, so it is not a POD. Therefore, the answer must be no. But what happens if we try anyway? Saying that a class is not a POD is saying only that memcpy is not guaranteed to work. It is not necessarily guaranteed to fail either. The question is whether copying the object is equivalent to copying its memory. To answer this question for the string class, we need to think about how it might be implemented.

A plausible implementation uses the string object itself to store an integer that represents the string’s length and a pointer to dynamically allocated memory that contains the string’s characters. This integer and pointer are fundamental types, so surely it must be possible to use memcpy to copy them, right?

Wrong. Here’s the problem:

string s = "hello", t = "world"; memcpy(&s, &t, sizeof(string));

Before we call the memcpy, both s and t contain pointers that refer to memory somewhere.

Calling memcpy will overwrite the pointer in s with a copy of the pointer in t. Now, both pointers will point to the same memory, and the memory holding hello, to which s formerly referred, will be inaccessible and will therefore never be freed.

This program fragment will therefore leak memory. Moreover, when it comes time to destroy s and t, the effect of doing so is likely to be to try to deallocate the same memory twice, resulting in a crash.
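To make the failure concrete, here is a minimal sketch of a string-like class along the lines just described. It is a deliberately simplified toy, not the real string, but it fails in the same way:

#include <cstddef>
#include <cstring>

class MiniString {      // toy version of the plausible implementation above
public:
    MiniString(const char* p)
        : len(std::strlen(p)), data(new char[len + 1])
    {
        std::memcpy(data, p, len + 1);
    }
    ~MiniString() { delete[] data; }   // frees the characters
private:
    std::size_t len;
    char* data;                        // points to dynamically allocated memory
};

int main()
{
    MiniString s("hello"), t("world");
    std::memcpy(&s, &t, sizeof(MiniString));
    // s.data is now the same pointer as t.data: the memory that held
    // "hello" has leaked, and when s and t are destroyed, delete[]
    // runs twice on the same memory, which is likely to crash.
}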

This example should make it clear why the rules for defining POD types exclude types such as string. The moment a class author defines a constructor, destructor, or assignment operator, we can no longer be confident that copying an object of that class is equivalent to copying the object’s memory.

Summary

Functions such as memcpy, which deal with a class object’s memory directly, undercut the class author’s intentions. Doing so is dangerous unless the intentions are at so low a level as to make them impossible to undercut. Such a class is called a POD (Plain Old Data) to indicate that there is nothing more to the class than its contents. The moment data abstraction enters the picture, be it through constructors, destructors, assignment operators, base classes, or virtual functions, it is time to use only the operations that the class provides, and eschew low-level operations such as memcpy.

Note

[1] This fact is not as obvious as it sounds. For example, we have seen computers with 36-bit words in which the usual way to represent characters is to stuff five seven-bit characters into a word, with one bit per word left over. This implementation strategy fails on two counts: bytes contain fewer than eight bits, and there are bits that are not part of any byte.

A C++ implementation could solve the first problem by using eight-bit bytes, but that strategy would still leave unused bits in each word. Therefore, a correct solution would have to involve bytes with a size that divides evenly into 36, namely 9, 18, or 36 bits.

Andrew Koenig is a member of the Large-Scale Programming Research Department at AT&T’s Shannon Laboratory, and the Project Editor of the C++ standards committee. A programmer for more than 30 years, 15 of them in C++, he has published more than 150 articles about C++ and speaks on the topic worldwide. He is the author of C Traps and Pitfalls and co-author of Ruminations on C++.

Barbara E. Moo is an independent consultant with 20 years’ experience in the software field. During her nearly 15 years at AT&T, she worked on one of the first commercial projects ever written in C++, managed the company’s first C++ compiler project, and directed the development of AT&T’s award-winning WorldNet Internet service business. She is co-author of Ruminations on C++ and lectures worldwide.

Introduction

This article is written mainly for beginners in the embedded world of microcontrollers. It may also benefit more experienced programmers who are coming from the safety net of writing code to run under an operating system like Windows, Mac OS X, Linux, FreeBSD or whatever.

It introduces the reader to the concept of "program context". A context is similar in many ways to threads and processes, but it is much simpler and, judging from the number of forum posts I see, often misunderstood.

So, let's move on step by step and get a good grasp of what's happening as we grow this example into a full-blown debacle of how not to do it (ending, of course, with examples of best practice to avoid these pitfalls).

I'd also like to say at this point that it may be a long article, but by the time you get to the end some of those evil printf()s will become saints again.

In the beginning

Let's start out with a new project in your compiler window. When you create a new project, the Mbed cloud compiler conveniently gives you a main.cpp that's already set and primed to trip you up and give you a bad day. Here's what it looks like:-

#include "mbed.h" DigitalOut myled(LED1); int main() { while(1) { myled = 1; wait(0.2); myled = 0; wait(0.2); } }

OK, so "tripping you up and having a bad day" is a bit harsh. What this main.cpp does for you is twofold. Having just opened your parcel and popped all the bubbles in the wrapping, there's nothing better than seeing your new toy perform. And along with the simplicity of the main.cpp, the hit is twofold: seeing it actually work and understanding how it worked. It doesn't get much simpler.

And as analogies go there's nothing worse than watching your children rip open their Christmas presents only to look up and see your spouse mouthing the words "Did you remember to buy the batteries?" If you did, fun is had by all, if you didn't, you know where the shed is.

Instant gratification is great, and main.cpp does just that. So, without further ado, let's move on and make a small modification to main.cpp. Delete what you have and cut'n'paste the following into it:-

#include "mbed.h" DigitalOut led1(LED1); int main() { while(1) { led1 = 1; wait(0.05); led1 = 0; wait(0.95); } }

There are two changes here. The first is that we renamed myled to led1. I don't know about you, but all those LEDs are mine (or yours if you bought it!); I just like to know which is which. The second change is how led1 is flashed. We switch it on, wait 0.05 seconds, switch it off and wait 0.95 seconds, then repeat, forever. The idea is that each pass of the while(1) { ... } loop should take around 1 second, and led1 flashes briefly to mark the start of that second.

Go ahead and compile/run it just to make sure all is working well. What you should see is led1 flashing briefly once per second.

It's at this point I'm going to introduce a simple diagram that represents what's happening right now with your Mbed. Now I know it's not a proper system context diagram, but nonetheless I'm going to call it a Context Diagram. Here it is:-

So, it's like a little graph. On the horizontal axis we have time, in seconds. The little blue boxes represent each time led1 flashes. I haven't given the vertical axis a name yet; more on that later. But the bar at the bottom represents a context. In this case it's "User Context", and that's the code that executes inside your while(1) { ... } loop (including any functions and sub-functions that you call from that loop). Also, the diagram only shows the first 9 seconds. All the code I'm going to write is designed to mess things up within those 9 seconds. I'm going to deliberately exaggerate things to make sure of that!

OK, this is all about interrupts and making a mess of it, so let's add our first interrupt and, err, make a mess of it. Here's the code; delete your main.cpp contents and paste this in:-

#include "mbed.h" Timeout to1; DigitalOut led1(LED1); DigitalOut led2(LED2); void cb1(void) { led2 = 1; wait(3); led2 = 0; } int main() { to1.attach(&cb1, 2); while(1) { led1 = 1; wait(0.05); led1 = 0; wait(0.95); } }

If you've just compiled and run that sample you'll have noticed that 2 seconds after reset, Timeout to1 was triggered and the callback function cb1() was executed. This function switched on led2, waited 3 seconds, switched it off again and returned. That bit should be fairly obvious. But what's not so obvious is what happened to led1 while led2 was on. Go on, press reset again and watch led1; what happens?

Yup, it doesn't flash! Now, let's return to our context diagram and "spell it out" with a picture:-

In the diagram above I have greyed out the "blue boxes" at t = 2, 3 and 4 to show that led1 didn't flash. As you can see, when timeout to1 triggered and cb1() was called, your program's context switched from User Context to Interrupt Context. And along with it, all the CPU's execution time was spent handling the Interrupt Service Routine (ISR). As a result, your while(1) loop was suspended and didn't execute. That's why led1 stopped flashing.

Now, you might have just done all this and be sat there thinking "well, it's obvious". But the point is, it's not as obvious as it may first appear. Sure, it is here, but that's because I put a whacking wait(3) in the callback. The point I'm trying to make is that spending too long inside callbacks, which usually run in interrupt context, will end up ruining your day. For some of you, the penny about why printf() suddenly breaks your program may well be dropping. We'll come back to this later. Let's move on and make an even bigger mess just to really drive home the nature of interrupts.

We're now going to add a second Timeout called to2 and give it its own LED and its own callback handler, cb2(). Here's the code; as usual, wipe the contents of your main.cpp and paste this in:-

main.cpp

#include "mbed.h" Timeout to1; Timeout to2; DigitalOut led1(LED1); DigitalOut led2(LED2); DigitalOut led3(LED3); void cb1(void) { led2 = 1; wait(3); led2 = 0; } void cb2(void) { led3 = 1; wait(3); led3 = 0; } int main() { to1.attach(&cb1, 2); to2.attach(&cb2, 3); while(1) { led1 = 1; wait(0.05); led1 = 0; wait(0.95); } }

Again we see things happening with the LEDs. As usual, while in interrupt context led1 stopped flashing as before, and this time it stopped for 6 seconds. But notice something else here. Our new timeout to2 was set to trigger three seconds after reset. That was one second after to1 triggered. So why didn't led3 come on one second after led2? What actually happened is that led2 came on for 3 seconds and then, when it went off, led3 came on for 3 seconds! Let's take a look at our new context diagram and see what's going on.

What should be obvious from this now is that interrupts don't interrupt currently executing interrupts. They "stack". As can be seen from the diagram above, although to2 triggered at t = 3, execution of cb2() didn't begin until t = 5, after cb1() had completed and returned.

So, after all this exaggeration it's time to turn our attention to the real world and the programs you write. The first and most obvious rule is:-

Rule 1

Don't use wait() in callbacks.

wait() is really there to be used during the initialisation phase of your program, the part before entering the while(1) loop. There are other times it's useful, but restrict them to User Context. Just don't use wait() in an ISR. (Note, there does exist wait_us(), which waits for a specified number of microseconds. This isn't nearly as bad. Just use it sensibly, though, and avoid wait() and wait_ms().)

But now it's time for rule number 2, the less obvious one. Use printf() in a callback at your peril. Why? It's the Swiss army knife of debugging, I hear you shout. Well, let's just disassemble printf(const char *format, ...);

  1. printf() doesn't have a magic buffer. It has to malloc() one.
  2. It has to guess at what size buffer to use, often twice the size of format, so that's a strlen() thrown in to get the length of format.
  3. If it overruns that 2x buffer it has to allocate a bigger one, memcpy() the old contents across and continue.
  4. Once it has something to send, in steps the real killer hidden within printf(): putc(). printf() must loop over the buffer and putc() each character.
  5. And with every malloc(), there's a free() just to top it off.

As said above, putc() is the hidden killer lurking to trip the unsuspecting developer. Let's take a look at what putc() does:-

  1. Is the UART transmit holding register (THR) empty? (read the LSR to find out)
  2. No? Wait until it is.
  3. Yes? Put the character/byte into the THR.
  4. Go back to 1 above until all characters are sent.

Now, out of the box the Serial baud rate is 9600 baud. So the loop above takes approximately 1 ms per character, for every character in printf()'s malloc()'ed buffer. It doesn't take long to realise that the more you ask of printf() inside a callback, the longer it takes for that callback to return and the longer you block the system.
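To put rough numbers on that (assuming the usual 8N1 framing, i.e. 10 bits on the wire per character):

    9600 baud / 10 bits per character = 960 characters per second
    1 / 960 seconds                   = roughly 1 ms per character
    a 50 character printf()           = roughly 50 ms stuck inside the callback

That's around 50 milliseconds of User Context gone, per call.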

Quite often a new programmer will find that it's OK initially. Their debugging pops out on their terminal. But as the project grows and the system is expected to do more and more stuff, eventually you reach a tipping point where you are simply spending too much time waiting inside a callback. And as shown earlier, interrupts stack. So when you return from a lengthy callback (thanks to printf()) you may well find yourself going straight back into another one, because another callback is pending with another darn printf() in it too. Game Over.

Which brings us to rule number 2.

Rule 2

Avoid printf() in callbacks. If you do use one for debugging a quick variable, remember to remove the darn thing! And if your entire program relies on a printf() in a callback by design rather than as a quick debug, think about redesigning your program to use better IO techniques. Which, luckily enough, are coming up next!
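To give a flavour of what "better IO techniques" means before we get there: do the slow printf() in User Context and keep the callback down to setting a flag. Here's a minimal sketch (the push button on p5 is just an assumption for the example; the pattern is the point):

#include "mbed.h"

InterruptIn button(p5);            // assumption: a push button wired to p5
Serial pc(USBTX, USBRX);

volatile int presses = 0;          // written in the ISR, read in User Context

void on_press(void) {
    presses++;                     // fast: just record that the event happened
}

int main() {
    button.rise(&on_press);
    int reported = 0;
    while(1) {
        if (presses != reported) { // the slow printf() happens here, in User Context
            reported = presses;
            pc.printf("presses = %d\r\n", reported);
        }
        // other work here
    }
}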

Mitigating the issues

So it's time to look at ways of removing these unsightly warts from callbacks. As with any programming language or system, there's more than one way to do things. I'm just going to show one example of my preferred way of dealing with this issue.

This technique is basically all about trying to keep yourself in User Context for the longest period possible and only leaving it for very brief "exceptions".

In the example above, here is what I was trying to do (this is my specification):

  1. led1: flash briefly once per second, EVERY second.
  2. led2: On at t = 2 and off at t = 5
  3. led3: On at t = 3 and off at t = 6

Let's look at how I would approach this problem; here's the code:-

main.cpp

#include "mbed.h" Timeout to1; Timeout to2; DigitalOut led1(LED1); DigitalOut to1_led(LED2); DigitalOut to2_led(LED3); bool to1triggered = false; bool to2triggered = false; void cb1(void) { to1triggered = true; } void to1handle(void) { if (to1_led == 0) { to1_led = 1; to1.detach(); to1.attach(&cb1, 3); } else { to1_led = 0; to1.detach(); } } void cb2(void) { to2triggered = true; } void to2handle(void) { if (to2_led == 0) { to2_led = 1; to2.detach(); to2.attach(&cb2, 3); } else { to2_led = 0; to2.detach(); } } int main() { to1.attach(&cb1, 2); to2.attach(&cb2, 3); while(1) { if (to1triggered) { to1triggered = false; to1handle(); } if (to2triggered) { to2triggered = false; to2handle(); } led1 = 1; wait(0.05); led1 = 0; wait(0.95); } }

Paste that into your compiler and run it. You'll see it does in fact work. Keen-eyed observers will however have noticed the huge bug in this program. We'll return to that shortly; I left it as it is so as not to add too much in one go.

So, what's new here? Mainly the introduction of two global bool variables, to1triggered and to2triggered. The main point of these is that when a callback is made, we set one of them to true. We don't do anything else. That's fast, very fast. Compared to the 9 seconds of User Context we have, these callbacks are pretty much instant, requiring just a few microseconds at most.

Now, in our User Context while(1) loop we test these bools. They are effectively acting as flags to your main program, telling it an event occurred that must be handled. So we handle them: we have functions dedicated to handling them. Notice each of these reschedules a new future callback based on whether the LED is on or not. If the LED is off, switch it on and reschedule a new timeout to switch it off later. Simples :) (Bad Russian accent optional).

Let's have a look at the context diagram for this program:-

Now, this diagram isn't to scale. If it were, you wouldn't be able to see to1/cb1 and to2/cb2: a pixel width of your screen is far too wide to represent the real time spent in the callbacks. Likewise, if we scaled to the width they are shown at, t = 9 would be by the bus stop somewhere down your road, far too wide for your monitor :)

The important point here is the amount of time you have available in User Context. You could be using it for more useful things like calculating PI or searching for extraterrestrial life with SETI. The point is, it's your time. Those LEDs will happily get on with their task while you get on with other tasks.

So, let's come back to that bug I mentioned earlier. It's here:-

led1 = 1; wait(0.05); led1 = 0; wait(0.95);

The only reason this program works is because I was careful to line up the event times with each other. Change the values of those wait()s and it'll all fall apart. So the answer is, yes! More timers. Have you noticed the Mbed library says you can have as many as you want? Useful! Shortly I'll show you how to fix this. But first, as always, a word of advice. This sort of technique is useful for event-driven systems. If your event handlers themselves start to get too long, then you can run into trouble. For example, if you call a function to go and help find another decimal place of PI, chances are to1triggered and to2triggered won't be getting tested for true anytime soon. So think, when designing your program, about where you'll be spending time, where you need to service events, etc.

Now, let's move on to the final example. This example handles the specification for the LEDs we outlined earlier AND lets you calculate PI without a care. Both will live in harmony. What's more, we extend LED2 and LED3 so they repeat their sequence rather than just coming on once and never again.

main.cpp

#include "mbed.h" Ticker tled1on; Timeout tled1off; Timeout to1; Timeout to2; DigitalOut led1(LED1); DigitalOut to1_led(LED2); DigitalOut to2_led(LED3); void cb1(void) { if (to1_led == 0) { to1_led = 1; } else { to1_led = 0; } // Reschedule a new event. to1.detach(); to1.attach(&cb1, 3); } void cb2(void) { if (to2_led == 0) { to2_led = 1; } else { to2_led = 0; } // Reschedule a new event. to2.detach(); to2.attach(&cb2, 3); } void tled1off_cb(void) { led1 = 0; } void tled1on_cb(void) { led1 = 1; tled1off.detach(); tled1off.attach(&tled1off_cb, 0.05); } int main() { led1 = 1; tled1off.attach(&tled1on_cb, 0.05); tled1on.attach(&tled1on_cb, 1); to1.attach(&cb1, 2); to2.attach(&cb2, 3); while(1) { // Calculate PI here as we have so much time :) // It's like riding a bike with no hands, who's // steering?! :) } }

There is one point to note. The LPC17xx manual refers to "modes", and there are two of them: Thread Mode and Handler Mode. Basically, Thread Mode is "user context" and Handler Mode is "interrupt context". I just prefer the notion of "executing in a context". So if you are reading the manual, you'll know what Thread and Handler Mode are.

And lastly, not covered here is interrupt priorities. The LPC1768 does allow interrupts to take priority, which in fact allows one interrupt to preempt a currently executing ISR rather than stacking, amongst other tricks. However, along with SVC and PendSV, I intend to cover these more advanced topics in a future article.
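For the curious, here's the two-line flavour of it using the standard CMSIS call (on the Cortex-M3 a lower number means a higher priority); the details really are for that future article:

NVIC_SetPriority(EINT3_IRQn, 1);   // external pin interrupts: higher priority
NVIC_SetPriority(TIMER3_IRQn, 4);  // a timer interrupt: lower priority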

Hope you enjoyed all that and if you are here reading this line:-

  1. Thanks for taking the time!
  2. Enjoy your programming experience, it's supposed to be fun!


18 comments on Regarding interrupts, their use and blocking:


Excellent introduction to the world of interrupts and calling contexts. Thanks very much for posting it and I look forward to any future articles.

Lol, nice analogies. Now if only this had been written like a month ago it would have been awesome but nonetheless it's a great article.

Really good write up, I look forward to your next one (would be useful to know how to assign interrupt priorities). One question - any reason why you detach and reattach the callbacks over using the ticker class?

Thanks for the comment. I am using the ticker class. I attach and reattach to save creating more tickers. I have a ticker, why create yet another one when I can reuse the one I already have?

Here's a question. What exactly happens if you were not to include the detach() function? Running the sample doesn't look like it affects that much. In a bigger program, what could happen?

Thanks, and great article!

Are there multiple levels of interrupt priorities?

Or, are all interrupts queued while one "callback" routine is running?

When are interrupts lost?

There are CMSIS functions that allow you to group interrupt sources into groups and have differing priorities for each group. I believe that by default all interrupts initially have the same priority, thus they stack as described. Adding in differing priorities means a higher-priority one (iirc it's a lower number) can interrupt an already running interrupt. So, in the pictures above you have to imagine more levels of IRQ context.

As for when are interrupts lost? Well, only when your code loses them, otherwise they'll stack. I suppose it's possible you could run out of stack space if you tried hard or had buggy code. But you shouldn't be losing them ;)

Andy Kirkham wrote:

Thanks for the comment. I am using the ticker class. I attach and reattach to save creating more tickers. I have a ticker, why create yet another one when I can reuse the one I already have?

Nice article, thanks. I don't understand what you mean in this comment when you say "reuse the one I already have", as cb1 and cb2 are controlled entirely by timeouts, and switching the led off is controlled by a timeout. So I can only see one use of the ticker, and that is in switching LED1 on and scheduling it to be switched off. In what manner are you reusing the ticker?

Is this a typo in the final piece of example code?

int main() {
    led1 = 1;
    tled1off.attach(&tled1on_cb, 0.05);
    tled1on.attach(&tled1on_cb, 1);

Shouldn't the one line be changed to the following?

tled1off.attach(&tled1off_cb, 0.05);

This is a minor typo and only causes the first blink to last 0.10 seconds instead of 0.05 seconds.

Now moving on to the topic that originally led me to this article, I'm having some problems with my InterruptIn pin not generating interrupts as reliably as I was hoping. I changed the interrupt priority levels so that my Ticker and Timer interrupts do not interfere with my external pin interrupts.

NVIC_SetPriority(EINT3_IRQn, 1);
NVIC_SetPriority(TIMER3_IRQn, 4);

I am having the most noticeable trouble when code in my "User Context" calls LocalFileSystem functions. Is this because of the blocking behavior mentioned on the handbook page?

Quote:

File access calls (fread, fwrite) will block, including interrupts, as semihosting is effectively a debug breakpoint

@Miles Frain

It's almost certainly LocalFileSystem that's getting in the way.

Hi Andy, Thank you very much for generously sharing your knowledge; it is obvious you have put a lot of thought and time into this and other postings on the site. Your work has and continues to be incredibly helpful and educational for me. Thanks again!

Mike

Great article Andy - love the way you used humor to spice things up.

"The only reason this program works is because I was careful to line up the event times with each other. Change the values of those wait()s and it'll all fall apart ..."

Please clarify - I can almost put my finger on the problem - but not quite.

Thx much.

Avnish

avnish aggarwal wrote:

"The only reason this program works is because I was careful to line up the event times with each other. Change the values of those wait()s and it'll all fall apart ..."

Please clarify - I can almost put my finger on the problem - but not quite.

It's working because the intervals for the timeouts are whole numbers of seconds, while the combined time for the two waits is exactly one second. This means that the timeouts will always interrupt the loop at the end of those waits. They line up perfectly.

Let's say that the second wait instead was ten seconds long (to make things obvious). In that case the loop will be interrupted during the ten-second wait, call the interrupt functions and set to1triggered and to2triggered to true, and then return to the loop and continue waiting. It won't reach the if-statements that turn the LEDs on and off until the second wait is done, which will take several seconds.

Very good advice but one place where it might come a bit unstuck is serial I/O.

If I want to handle incoming serial data in the received data callback then it seems I need to use getc() in the callback, but that's a blocking function. Is it guaranteed that there will be a character waiting and a single call to getc() will never block or do I have to wrap it in "if readable()"?

Is there a nonblocking getc() and putc(), without having to resort to direct hardware access?

Thanks a lot for this. It helped me solve a problem that I had been having for two days using multiple tickers. In my case I wasn't using any wait() or printf() functions within the callback and there was plenty of time to do the operations but I think you simply cannot put too many operations within a callback function. (Board FRDM KL25Z)

Thanks for the article! It's great...and I always enjoy something written with a sense of humour!
