Hi there,
With conversational programming and systems built entirely by AI, we
have to think about how to test those systems.
Testing is how developers know whether a system is doing the
right thing.
How do we get AI to double-check the systems it creates? Should we simply
trust AI? If so, we wouldn't need testing at all.
Maybe we could have one AI create the system and a second AI test it?
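As a rough sketch of that split (builder_ai and tester_ai below are placeholder stubs standing in for two separate model calls, not a real API, and running the tests assumes pytest is installed):

import pathlib
import subprocess
import sys
import tempfile

# Hypothetical stand-ins for two independent models. The key property is
# that the tester never sees the builder's code, only the spec.
def builder_ai(spec: str) -> str:
    """Return source code implementing the spec (placeholder)."""
    return "def add(a, b):\n    return a + b\n"

def tester_ai(spec: str) -> str:
    """Return tests written only from the spec, never from the
    builder's output (placeholder)."""
    return (
        "from solution import add\n\n"
        "def test_add():\n"
        "    assert add(2, 3) == 5\n"
        "    assert add(-1, 1) == 0\n"
    )

def cross_check(spec: str) -> bool:
    """One AI builds, the other AI tests, and the tests decide."""
    with tempfile.TemporaryDirectory() as tmp:
        pathlib.Path(tmp, "solution.py").write_text(builder_ai(spec))
        pathlib.Path(tmp, "test_solution.py").write_text(tester_ai(spec))
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q"],
            cwd=tmp,
            capture_output=True,
        )
        return result.returncode == 0

if __name__ == "__main__":
    passed = cross_check("add(a, b) returns the sum of two numbers")
    print("tests passed" if passed else "tests failed")

The point is the independence: because the tester only reads the spec, it can't simply rubber-stamp the builder's mistakes.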
Something to think about.
Pawel