One notable technology feature missing from Tesla cars has been a 360-degree surround view. Owners have been asking for the feature for a while on Twitter, and Elon Musk has finally listened.
According to Musk, a vector-space bird's-eye view will come with the release of the Full Self-Driving (FSD) software. A vector-space representation simply means that, rather than stitching together pictures or video, the feature will use a mathematical representation of the space around the car to render an image of its surroundings.
Vector-space bird’s eye view coming with FSD— Elon Musk (@elonmusk) October 3, 2020
It makes sense that this feature is being introduced with the full self-driving software release. In the past, talking about the new FSD software, Elon Musk has said it will be able to see in 4D, meaning the three dimensions of space plus time.
In this view, the car, using its cameras and other sensors, will continuously recreate a mathematical representation of the world around it. As a result, adding a 360-degree surround view will simply mean displaying information the vehicle already has.
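To make the idea concrete, here is a minimal sketch of how a vector-space scene could be projected into a top-down bird's-eye image. The object types, coordinates, and scale are invented for illustration; Tesla's actual data formats are not public.

```python
# Hypothetical sketch: rendering a bird's-eye view from a vector-space
# scene. All names, coordinates, and the scale are assumptions for
# illustration only.

def to_birdseye_pixel(x_m, y_m, img_size=200, scale=5.0):
    """Map a position in meters (car at origin, x forward, y left)
    to pixel coordinates on a top-down image with the car centered."""
    px = int(img_size / 2 - y_m * scale)  # left in the world -> left in the image
    py = int(img_size / 2 - x_m * scale)  # forward in the world -> up in the image
    return px, py

# A "vector space" scene: objects stored as coordinates, not camera pixels.
scene = [
    {"type": "car",        "x":  8.0, "y": -1.5},  # 8 m ahead, slightly right
    {"type": "pedestrian", "x":  3.0, "y":  4.0},  # ahead and to the left
    {"type": "curb",       "x": -2.0, "y":  2.5},  # behind and to the left
]

for obj in scene:
    px, py = to_birdseye_pixel(obj["x"], obj["y"])
    print(f'{obj["type"]:>10} -> pixel ({px}, {py})')
```

Because the scene is just coordinates, redrawing it from any viewpoint, including straight down, is a cheap rendering step rather than a new sensing problem.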
Other features expected to come with the full self-driving software release are autosteer on city streets, enhanced summon (the car will be able to pick you up from anywhere), and pothole avoidance. These will be added to the already extensive capabilities of the current FSD package, which include Navigate on Autopilot, auto lane change, autopark, summon, and traffic light and stop sign control.
According to Elon Musk, the new features won't come as an enhancement to the current ones, but as part of a full architectural rewrite of the Autopilot software. Speaking at the Battery Day event, Musk said, “It is kind of hard for people to judge the progress of Autopilot. I drive the bleeding edge alpha build of Autopilot, so I sort of have an insight into what is going on.”
“Previously, about a couple of years ago, we were kind of stuck in sort of a local maximum. So we were improving, but the improvements kind of started tapering off.”
“So we had to do a fundamental rewrite of the entire autopilot software stack, and all of the labeling software as well. We are now labeling in 3D video.”
“This is hugely different from previous times. We were labeling a bunch of single images from the 8 cameras. They would be labeled at different times by different people, and with some of the labels you literally can’t tell what it is you are labeling. This caused a lot of errors.”
“Now, with our new labeling tools, we label entire video segments. So you basically get a surround video of the thing to label: surround video, and with time. It is now taking all cameras simultaneously and looking at how the images change over time.”
“The sophistication of the neural nets in the car and the overall logic has improved dramatically”.
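The labeling change Musk describes can be sketched as a data-structure shift: instead of many independent per-image labels, one label covers an object's track across all cameras over a span of time. The camera names, fields, and values below are assumptions for illustration; Tesla's internal tooling is not public.

```python
# Hypothetical sketch contrasting per-image labels with surround-video
# labels. Camera names and all fields are invented for illustration.
from dataclasses import dataclass, field

CAMERAS = ["front_main", "front_wide", "front_narrow", "left_pillar",
           "right_pillar", "left_repeater", "right_repeater", "rear"]

@dataclass
class SingleImageLabel:
    # Old approach: one label per image, per camera, per moment;
    # different frames might be labeled by different people.
    camera: str
    timestamp: float
    object_type: str
    bbox: tuple  # (x, y, w, h) in image pixels

@dataclass
class SurroundVideoLabel:
    # New approach: one label for an object track across all cameras
    # over a span of time, in vehicle-centric 3D coordinates.
    object_type: str
    start_time: float
    end_time: float
    # time (seconds) -> (x, y, z) position relative to the car, in meters
    trajectory: dict = field(default_factory=dict)

# One surround-video label covers the whole clip, from every camera at once.
truck = SurroundVideoLabel("truck", 0.0, 2.0,
                           {0.0: (20.0, -3.5, 0.0), 1.0: (15.0, -3.5, 0.0)})
print(truck.object_type, len(truck.trajectory))
```

The point of the second structure is consistency: a single labeler (or tool) commits to one identity and one 3D track for the object, instead of eight cameras' worth of frames being labeled separately and possibly disagreeing.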
Elon Musk says a private beta of this improved software should be released in three weeks' time.
This should be really exciting to see. As for the private beta release: Tesla uses private beta testers to help the company iron out bugs before software is introduced to the general public. These individuals sign a confidentiality agreement and are supposed to keep their findings secret. However, real-world driving demos always leak within hours of a major update.
And as soon as those videos come out, we will keep you posted on how these features perform out in the real world.
So what do you think? Are you happy 360-degree surround-view is finally coming to Teslas? And how about the full self-driving software? Are you excited to see its capabilities as much as I am? Let me know your thoughts down in the comments below.
For more information, check out: Elon Musk says Tesla for sure will enter India, might be the location for the next Gigafactory. Also, see Why we think a Model S and X refresh will come next year.
Tinsae Aregay has been following Tesla and the evolution of the EV space on a daily basis for several years. He covers everything about Tesla, from the cars to Elon Musk, the energy business, and autonomy. Follow Tinsae on Twitter at @TinsaeAregay for daily Tesla news.
Comments
This is both good and bad news. It is bad news if all of the picture data and neural net info collected so far cannot be integrated into the new (4D) neural net learning model (I'm not sure that this is the case). But regardless, it is good news that Tesla is willing to radically redesign their FSD software with the greater goal of it being more accurate and working better.
In reply to DeanMcManis:
Even if they can't use their trained models, they probably still have the data to train their new neural nets.
True, it's likely that the core metadata will be common to both the old and new neural net systems, but it could take a long time to process and analyze that data if they are starting from scratch, rather than building on the years of work already done.
In reply to DeanMcManis:
I guess this is where Dojo comes into play.
But we still don't know if Dojo is expanding on the analytic models that the previous FSD neural network system used, or if it is starting over. They will still have the vast quantity of recorded metadata and video to feed Dojo. But how long it will take to achieve their accuracy goals is less important than how much better Dojo performs than before.