RE:RE: As a researcher in the ML field, this is why im happy

Bollocks, I typed out a detailed response but the page refreshed, so I'm going to summarize this one.
Will the expansion beyond what we are currently testing for occur organically by the AI? (Or is this the architecture piece you mentioned?)
If the question is whether the AI can "teach itself" to adapt to a new task using its previous knowledge, the answer is, unfortunately, no.
The company would have to train a new network on new data to pivot to new AI solutions, something ADK seems to be very comfortable with.
Being able to abstract old knowledge and apply it to a new task is something the human brain does remarkably well, and it is the "holy grail" of artificial intelligence (the ML jargon for it is transfer learning).
In the article, Google said, "After inputting 60,000 images, the research team found that continuing to add images no longer improved the algorithm." This statement surprised me because I was under the impression that more data was always better and was the requirement for advancement.
The short answer is that the amount of information the network can "learn" from each additional image has diminishing returns. You can google "backpropagation" if you want more detail, but I don't want to get too technical on a stock board. So the value of more data also has diminishing returns; usually the quality of the data is more critical (as long as there is enough of it to train your network).
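If you want a feel for why more data stops helping, here's a toy Python sketch. To be clear, this is my own illustration with made-up numbers, not Google's experiment: it just fits a slope to noisy points and shows that each 10x increase in data buys a smaller and smaller improvement in accuracy.

```python
import numpy as np

# Toy illustration of diminishing returns from more data (nothing to do
# with Google's actual study): estimate the slope of a noisy line from
# n samples and watch the error improvement shrink as n grows.
rng = np.random.default_rng(0)
true_slope = 2.0

def slope_error(n, trials=200):
    """Average absolute error of a least-squares slope fit on n noisy points."""
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n)
        y = true_slope * x + rng.normal(0, 1, n)
        est = (x @ y) / (x @ x)  # closed-form least-squares slope through the origin
        errs.append(abs(est - true_slope))
    return float(np.mean(errs))

for n in [100, 1_000, 10_000, 100_000]:
    print(f"{n:>7} samples -> average slope error {slope_error(n):.4f}")
```

Each extra order of magnitude of data only shaves off a sliver of the remaining error, which is the same flavor of saturation the researchers hit at 60,000 images.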