Patent attributes
A dual variational autoencoder-generative adversarial network (VAE-GAN) is trained to transform a real video sequence and a simulated video sequence by inputting the real video sequence into a real video encoder and a real video decoder and inputting the simulated video sequence into a synthetic video encoder and a synthetic video decoder. Real loss functions and synthetic loss functions are determined based on output from a real video discriminator and a synthetic video discriminator, respectively. The real loss functions are backpropagated through the real video encoder and the real video decoder to train that pair, and the synthetic loss functions are likewise backpropagated through the synthetic video encoder and the synthetic video decoder. The real video discriminator and the synthetic video discriminator can be trained to distinguish an authentic video sequence from a fake video sequence using the real loss functions and the synthetic loss functions. An annotated simulated video sequence can then be transformed by the synthetic video encoder and the real video decoder of the dual VAE-GAN to generate a reconstructed annotated real video sequence that includes style elements based on the real video sequence. A second neural network is trained on the reconstructed annotated real video sequence to detect and track objects.
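A minimal PyTorch sketch of the dual training step described above follows. The module names (Encoder, Decoder, Discriminator), the 64x64 frame size, the latent width, and the loss weights are illustrative assumptions rather than the patent's implementation, and video sequences are treated as batches of frames for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    LATENT_DIM = 128  # assumed latent width

    class Encoder(nn.Module):
        # Maps a frame batch (N, 3, 64, 64) to a latent mean and log-variance.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(128 * 8 * 8, LATENT_DIM)
            self.fc_logvar = nn.Linear(128 * 8 * 8, LATENT_DIM)

        def forward(self, x):
            h = self.features(x)
            return self.fc_mu(h), self.fc_logvar(h)

    class Decoder(nn.Module):
        # Maps a latent vector back to a (N, 3, 64, 64) frame batch.
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(LATENT_DIM, 128 * 8 * 8)
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8 -> 16
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
            )

        def forward(self, z):
            return self.deconv(self.fc(z).view(-1, 128, 8, 8))

    class Discriminator(nn.Module):
        # Scores a frame batch: authentic (high logit) versus fake (low logit).
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 1),
            )

        def forward(self, x):
            return self.net(x)

    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def domain_losses(x, enc, dec, disc):
        # VAE-GAN losses for one domain: reconstruction + KL + adversarial
        # terms for the encoder/decoder, plus an authentic-vs-fake loss for
        # the discriminator. The 0.1 adversarial weight is assumed.
        mu, logvar = enc(x)
        recon = dec(reparameterize(mu, logvar))
        rec_loss = F.l1_loss(recon, x)
        kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        fake_logits = disc(recon)
        adv_loss = F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))
        real_logits = disc(x)
        fake_logits_d = disc(recon.detach())
        disc_loss = (
            F.binary_cross_entropy_with_logits(
                real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(
                fake_logits_d, torch.zeros_like(fake_logits_d)))
        return rec_loss + kl_loss + 0.1 * adv_loss, disc_loss

    def train_step(x_real, x_synth, nets, opt_gen, opt_disc):
        # Real losses backpropagate through the real encoder/decoder and
        # synthetic losses through the synthetic pair; opt_gen holds the
        # parameters of all four modules, opt_disc both discriminators'.
        gen_r, disc_r = domain_losses(
            x_real, nets["enc_real"], nets["dec_real"], nets["disc_real"])
        gen_s, disc_s = domain_losses(
            x_synth, nets["enc_synth"], nets["dec_synth"], nets["disc_synth"])
        opt_gen.zero_grad()
        (gen_r + gen_s).backward()
        opt_gen.step()
        opt_disc.zero_grad()
        (disc_r + disc_s).backward()
        opt_disc.step()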
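Continuing the sketch above, and assuming the four modules are trained, the cross-domain step pairs the synthetic encoder with the real decoder; train_detector_step is a hypothetical training step for the second network, with detector and criterion standing in for any object-detection/tracking model and its loss.

    @torch.no_grad()
    def synth_to_real(x_synth, enc_synth, dec_real):
        # Cross-domain pass of the dual VAE-GAN: encode with the synthetic
        # encoder, decode with the real decoder, so the output keeps the
        # simulated scene content but acquires real-domain style elements.
        mu, _ = enc_synth(x_synth)  # mean latent for a deterministic transform
        return dec_real(mu)

    def train_detector_step(detector, criterion, optimizer,
                            x_synth, labels, enc_synth, dec_real):
        # One step of training the second network on reconstructed annotated
        # frames. The simulation's object labels carry over unchanged because
        # the transform restyles appearance without moving scene content.
        # `detector` and `criterion` are hypothetical stand-ins, not
        # components specified by the patent.
        restyled = synth_to_real(x_synth, enc_synth, dec_real)
        loss = criterion(detector(restyled), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The design point this illustrates is that annotations produced for free by the simulator remain valid after restyling, so the second network gets real-looking training imagery without manual labeling.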