Sure, but your analogy fails there, since it implies that the map contains the entire original work. AI models do not contain the complete originals of their training data; if they did, they would be the most efficient compression method ever devised. Terabytes of training data obviously cannot be squeezed losslessly, one for one, into a model that is only a couple of gigabytes even at the high end.
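Just to put rough numbers on that point, here's a back-of-the-envelope sketch. The corpus and model sizes are purely illustrative assumptions, not figures for any specific model:

```python
# Illustrative numbers only -- not measurements of any real model or dataset.
training_data_tb = 10    # assumed training corpus size, in terabytes
model_size_gb = 100      # assumed model size, in gigabytes

training_data_gb = training_data_tb * 1000
ratio = training_data_gb / model_size_gb

# Storing the corpus verbatim in the model would imply ~100x lossless
# compression of already-mixed text, images, etc.
print(f"implied compression ratio: ~{ratio:.0f}x")
```

Even with generous assumptions, the implied ratio is far beyond what general-purpose lossless compression achieves, which is the point being made above.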
AI models can, in many cases, reproduce large portions of their training data verbatim. (Some deployments add controls on top to prevent this, but the underlying model has memorized the data.)
And even if they can't reproduce an entire work, they're still derivative works of the training data.