They will have programmed motivations in response to various stimuli. And if you make the AI advanced enough to self-correct, learn, etc., it could build upon its existing index of motivations/reactions to react in ways it deems fit within its programming.
Something akin to free will... or at least the ability to push the boundaries.
Yeah, that's the problem really; in order to really do things, they'll have to be capable of learning, adapting, and developing skills and abilities that they might not have at the time.
Which means it might be really easy to make slip-ups or leave aspects of their programming open to "interpretation".