The evolution of smart video and the edge
Brian Mallari, Director of Product Marketing, Smart Video, at Western Digital discusses the evolution of smart video and the edge.

By 2025, the global market for video surveillance cameras will grow to nearly $50 billion, according to the latest estimates by IDC. As the demand for smart video grows, paralleled by an increase in the use of Artificial Intelligence (AI), this will drive the development of data architectures at the edge.

AI is excellent at doing specific, narrow tasks incredibly well. The aim of AI is not to teach technology to see the world as humans do, but to enable computers to capture, analyse, and learn about the human world in rapid and accurate ways. The profound value of AI comes from taking computer intelligence capabilities – such as object recognition, movement detection, and tracking or counting objects/persons – and using them in the right application. Given the utility of these applications, it’s not surprising that the amalgamation of video, artificial intelligence, and sensor data is a hotbed for new services across industries.

Larger and smarter use cases

Smart video is a keystone of modern security and surveillance activity, but the market is also expanding through a growing number of use cases. These include medical applications, sports analysis, factories, traffic management, and even agricultural drones. Intelligent technology is making these use cases “smart”, i.e. devices that act on intelligent insights. For example, in “smart cities”, cameras and AI analyse traffic patterns and adjust traffic lights accordingly to improve vehicle flow, reduce congestion and pollution, and increase pedestrian safety. Another example is the “smart factory”, which implements the kind of narrow tasks AI excels at, such as detecting flaws or deviations in the production line in real time and adjusting to reduce errors.
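As a toy illustration of one such narrow task – movement detection – consider a minimal frame-differencing sketch. It is not any vendor's actual algorithm; the frame format, threshold, and function names are illustrative assumptions.

```python
# Minimal sketch of frame-differencing motion detection, one of the
# narrow tasks edge AI pipelines build on. Frames are modelled as flat
# lists of 8-bit grayscale pixel values; all names are illustrative.

def motion_score(prev_frame, curr_frame):
    """Mean absolute pixel difference between consecutive frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, curr_frame)]
    return sum(diffs) / len(diffs)

def detect_motion(frames, threshold=10.0):
    """Return indices of frames whose difference from the previous
    frame exceeds the threshold."""
    events = []
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i]) > threshold:
            events.append(i)
    return events

static = [50] * 64                 # a dull, unchanging scene
moved  = [50] * 32 + [200] * 32    # an object enters the frame
print(detect_motion([static, static, moved, moved]))  # → [2]
```

Real deployments replace the differencing step with a trained model, but the shape of the workload is the same: a continuous stream of frames, scored in real time, producing events and metadata.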
Smart cameras can be very effective in this use case at the level of quality assurance, keeping costs down through automation and earlier fault detection.

Changes at the edge

As smart video evolves, it is developing in parallel with other technological and data infrastructure advancements, such as 5G. As these technologies come together, they are impacting the architecture of the edge and what we require from data storage. More specifically, they are driving demand for specialised storage. Here are some of the biggest trends currently developing:

1. Strength in numbers

More cameras means more media-rich data to be captured, analysed, and used to train AI. Simultaneously, cameras are supporting higher resolutions (4K video and above). The more detailed and sharp the video, the more insights can be extracted from it, and thus the more effective the AI algorithms can become. In addition, new cameras transmit not just a main video stream but also additional low-bitrate streams, perfect for low-bandwidth monitoring and AI pattern matching.

One of the biggest challenges for these workloads is that they are always on: many smart cameras, particularly in security, operate 24/7, 365 days a year. Storage technology must be able to keep up. It has evolved to meet this challenge by delivering the high data transfer and write speeds needed to ensure high-quality video capture. Furthermore, on-camera storage technology that can deliver longevity and reliability has become even more critical than storage sitting in a remote data centre.

2. The rich variety of endpoints

The realm of security relies on more than just visual data. New types of cameras are being developed, with new types of data to be analysed.
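The always-on recording workload described above can be put in perspective with some back-of-the-envelope arithmetic. The bitrates here are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope storage maths for an always-on camera, assuming
# an illustrative 4K main stream at 25 Mbit/s plus a 2 Mbit/s
# low-bitrate substream for monitoring and AI pattern matching.

MBIT = 1_000_000
main_stream_bps = 25 * MBIT     # assumed 4K main-stream bitrate
sub_stream_bps  = 2 * MBIT      # assumed low-bitrate substream

total_bps = main_stream_bps + sub_stream_bps
bytes_per_day = total_bps / 8 * 86_400            # 86,400 seconds/day
terabytes_per_year = bytes_per_day * 365 / 1e12   # 24/7, 365 days

print(f"Sustained write rate: {total_bps / 8 / 1e6:.1f} MB/s")
print(f"Storage per camera-year: {terabytes_per_year:.1f} TB")
```

Even under these modest assumptions, a single camera generates a continuous multi-megabyte-per-second write stream and over a hundred terabytes per year – which is why sustained write performance and endurance dominate the storage requirements.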
Cameras can be found everywhere – atop buildings, inside moving vehicles, in drones, and even in doorbells. The location and form factor of smart cameras impact the storage technology required. The accessibility of cameras (or lack thereof) needs to be considered – are they atop a tall building, or amid a remote jungle? Such locations might need to withstand extreme temperature variations – for example, security drones monitoring a location of extreme heat. All of these possibilities need to be considered to ensure long-lasting, reliable, continuous recording of critical video data.

3. AI chipsets

Increasingly, real-time decisions are being made at the edge, at device level, thanks to improved compute capabilities in cameras. New chipsets are arriving that deliver improved AI capability, and more advanced chipsets offer deep neural network processing for on-camera deep learning analytics. AI keeps getting smarter and more capable. As innovation within cameras continues, there is a rising expectation that deep learning – which requires large video data sets to be effective – will happen on-camera too, driving the need for more primary on-camera storage.

Even for solutions that employ standard security cameras, AI-enhanced chipsets and discrete GPUs (graphics processing units) are still being used in network video recorders (NVRs), video analytics appliances, and edge gateways to enable advanced AI functions and deep learning analytics. With NVR firmware and OS (operating system) architecture evolving to add such capabilities to mainstream recorders, the implications for storage are large: it has to handle a much bigger workload. For example, there is a need to go beyond storing single and multiple camera streams; today, metadata from real-time AI and reference data for pattern matching need to be stored as well.

4. Don’t say goodbye to the cloud

The majority of the video analytics and deep learning for today’s smart video solutions is completed by discrete video analytics appliances or in the cloud. Similarly, broader Internet of Things (IoT) applications that use sensor data beyond video are also tapping into the power of the deep learning cloud to create more effective, smarter AI. To support these new AI workloads, the cloud has undergone a transformation. Neural network processors within the cloud have adopted the use of massive GPU clusters or custom FPGAs (field programmable gate arrays), and they are being fed thousands of hours of training video.
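The growing NVR-side workload described earlier – AI metadata and pattern-matching reference data stored alongside the video streams – might be modelled with a simple record schema. The field names and values here are assumptions for illustration only:

```python
# Illustrative sketch of the per-detection metadata an AI-enabled
# recorder might persist alongside each camera stream. Every field
# name and value here is a hypothetical example.
from dataclasses import dataclass, asdict
import json

@dataclass
class DetectionRecord:
    camera_id: str
    frame_timestamp_ms: int
    label: str          # e.g. "person", "vehicle"
    confidence: float
    bbox: tuple         # (x, y, width, height) in pixels

record = DetectionRecord(
    camera_id="cam-07",
    frame_timestamp_ms=1_700_000_000_000,
    label="person",
    confidence=0.91,
    bbox=(120, 80, 64, 128),
)

# Serialised next to the video segment it annotates, records like this
# let pattern matching and search run without decoding the raw stream.
line = json.dumps(asdict(record))
print(line)
```

Multiplied across many cameras and many detections per second, this side channel of small writes is a materially different workload from sequential video recording, which is part of why the article argues for specialised storage.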