The seven articles I posted four years ago on the art of using videos to improve operations included no pointers on what to do with the videos once you have them. This concern may seem premature in a manufacturing world where video recordings of operations are still rare, process instructions are in dusty binders and obsolete, customization specs come in the form of all-uppercase text from a 30-year-old dot matrix printer with a worn-out ribbon, engineering project records reside in individual employees’ laptops, and management expects IT issues to be resolved by implementing a new, all-in-one ERP system.
In everyday life, on the other hand, videos are already in common use to explain how to pry loose a stuck garbage disposal, remove a door lock, change a special bulb in a car headlight, or neatly cut a mango into cubes. You just describe your problem in a Youtube search, and up come videos usually shot and narrated by handy amateurs, and sometimes pros. It is particularly useful for tasks involving motion with key points that are difficult to explain with words or still images. The manufacturing world will eventually catch up.
Let us assume that, as a pioneer, you have painstakingly accumulated detailed videos about your operations that you can use for internal training, technology transfer, or as raw materials for further analysis. You might have questions like “In this plant, how many different methods do we use to enlarge holes in aluminum plates?” and would like to query all the video segments that show such operations. And you would also like this system to keep your secrets from leaking.
The first challenge is to deal with the bulk of video data, in tens of megabytes per minute and orders of magnitude larger than text, sound and high-resolution still images. This restricts the kind of hardware and software that you can use. The second, even more daunting challenge is to index and organize this material for easy and precise retrieval. You want your query about hole enlargement methods to yield everything you have recorded on that subject and nothing else. You also want it easy to retrieve for authorized users and impossible for others.
Video Storage And Retrieval
For controlled, easy retrieval of data by authorized users, you use databases. Until about 2000, databases primarily held numbers, text, or categories, the kinds of data that can serve to catalog videos, but the advent of web services has changed this.
In a way, storing videos in a database is like storing entire books inside the card catalog of a library. With printed cards and books, it is physically impossible; with electronic catalogs of ebooks, still images, or videos, it is feasible and, as far as end-users are concerned, is done. You search and the book appears, the picture pops up, or the video starts streaming. This is the user experience on Google, Amazon, Facebook, Youtube, etc.
From a manufacturing perspective, what difference does it make where the videos of operations are stored? Many recommendations you find online are to keep videos in a separate folder and have only links to them in your database, which is like keeping location and quantity data in your plant while storing parts in an outside warehouse. When you do this, you increase the risk of inaccuracies arising from unreported moves in the warehouse, and you lose the benefits of your plant’s security system. You need a separate one for the warehouse.
Back in your video repository, the videos themselves are unwieldy, enormous blobs of data. By storing them outside your database, you keep it small and responsive, but you lose useful services like access control, with user roles and privileges, and with permissions granted on subsets of the data, for example on one product family but not another. If you keep your videos separate from the database, you have to come up with your own method to keep them organized and secure.
You can also relocate a database as a whole when you upgrade your hardware. If the connection to the videos is by links to files outside your database, on the other hand, it’s extra work for you to make sure the links are not broken in the move.
If you are Google, you may decide that your own engineers can do a better job at developing these functions than any products on the market, and what they develop may itself become an industry standard. This has already happened: web companies have adopted MapReduce, a programming model Google published for parallel, distributed processing of large data sets, most visibly through its open-source implementations.
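The MapReduce model itself fits in a few lines. The following toy sketch, in plain Python rather than any production framework, runs the map, shuffle, and reduce phases over a handful of invented video-annotation records to count how often each operation appears; the record contents are made up for illustration.

```python
from collections import defaultdict

# Hypothetical annotation records: (video id, operation observed).
annotations = [
    ("v1", "deburring"), ("v1", "drilling"),
    ("v2", "drilling"), ("v3", "deburring"),
    ("v3", "deburring"),
]

# Map phase: emit a (key, 1) pair for each record.
mapped = [(operation, 1) for _, operation in annotations]

# Shuffle phase: group the pairs by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: combine the values for each key.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'deburring': 3, 'drilling': 2}
```

In a real deployment, the map and reduce steps run in parallel on machines holding different slices of the data, which is what makes the model suited to Google-scale data sets.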
If you are a shoe manufacturer in Arkansas, on the other hand, what works for Google may be a costly mistake for you. You may be better off investing in a document database that is effective at handling blobs, and yes, it is the technical term used to designate these large objects with unknown internal structures. A clever programmer later decided to make “blob” stand for “Binary Large Object.” A blob can be a document in PDF, a photograph, or a video, and you attach it to an author name, a date, a revision number, other blobs it is related to, and annotations.
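To make the idea concrete, here is a minimal sketch of storing a video as a blob next to its metadata so that one query retrieves both. It uses Python's built-in sqlite3, standing in for a document database, and the author, date, and annotation values are invented for illustration.

```python
import sqlite3

# An in-memory database standing in for the video repository.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE videos (
        id INTEGER PRIMARY KEY,
        author TEXT,
        recorded_on TEXT,
        revision INTEGER,
        annotation TEXT,
        content BLOB      -- the video itself, not a link to a file
    )
""")

# A few bytes stand in for megabytes of video data.
fake_video = b"\x00\x01\x02\x03"
db.execute(
    "INSERT INTO videos (author, recorded_on, revision, annotation, content)"
    " VALUES (?, ?, ?, ?, ?)",
    ("J. Smith", "2017-06-01", 2, "enlarging holes in aluminum plates", fake_video),
)

# Retrieval by annotation brings back metadata and content together,
# with no link to an outside folder that could break.
row = db.execute(
    "SELECT author, content FROM videos WHERE annotation LIKE ?",
    ("%holes%",),
).fetchone()
print(row[0])  # J. Smith
```

Because the blob lives inside the database, it moves with it in an upgrade and is covered by the same access controls as the metadata.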
Indexing And Organizing Videos
The obvious model for a video library is Youtube, and it is where you find the instructional videos on all sorts of tasks, as mentioned earlier. You could upload your shop floor videos and make them private to a group of authorized users but these users would see them commingled with other videos they also have access to. Generally, Youtube provides more features to help video authors reach a large audience than to support the use of videos as a resource inside a work group.
Until May 2 of this year, Youtube let authors annotate videos, by which they meant marking up the videos, not gathering structured data about them. In any case, they no longer offer this feature. The reason they give is that it didn’t work on mobile devices, on which 60% of the viewing occurs, and that the cards and end-screens they provide instead “generate more click-throughs,” which should endear them to Marketing more than to Operations.
Your organization needs a private library of videos, organized to support operations, improvement activities, training, reference, and technology transfer, and Youtube is not designed for these purposes.
It is common for the systems used in manufacturing to hold the most interesting data in hard-to-reach places. In maintenance, for example, the easily accessible data is usually limited to administration: where and when incidents occur, the identity of the affected machine, and the names of the assigned technicians. The technical meat of symptoms, diagnosis, and repairs, on the other hand, is only documented, if at all, in free-form comments. Today’s text mining technology, however, is capable of extracting information from such comments and structuring it in such a way that you can analyze it.
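As a toy illustration of extracting structure from free-form text, the following sketch tags maintenance comments by matching them against a hand-made symptom vocabulary. A production system would use a real text-mining library; the comments and keywords here are invented.

```python
import re

# Hypothetical vocabulary mapping keyword stems to symptom categories.
SYMPTOMS = {
    "leak": "hydraulic leak",
    "vibrat": "abnormal vibration",
    "overheat": "overheating",
}

def tag_comment(comment):
    """Return the symptom categories whose keywords appear in the comment."""
    found = set()
    for keyword, category in SYMPTOMS.items():
        if re.search(keyword, comment, re.IGNORECASE):
            found.add(category)
    return found

comments = [
    "Spindle vibrating badly, replaced bearing",
    "Oil leak at cylinder, overheating motor",
]
tags = [tag_comment(c) for c in comments]
print(tags)  # [{'abnormal vibration'}, {'hydraulic leak', 'overheating'}]
```

Even this crude tagging turns free-form comments into categories you can count, chart, and Pareto-analyze by machine or by technician.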
Mining the content of still images is also possible. Google does it for human faces. From a photograph of an operation, we could identify a location, a machine, a workpiece, a tool, or an operator. Automatic Video Content Analysis (VCA) today can detect motion and identify objects. It is astonishing technology, but still a far cry from automatically analyzing operator tasks the way we do it with the techniques described in the prior posts of this thread.
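At its simplest, the motion detection in VCA comes down to comparing consecutive frames. The following sketch, in plain Python on tiny invented grayscale frames, flags pixels whose brightness changes by more than a threshold; real systems add filtering and object tracking on much larger images.

```python
def detect_motion(frame_a, frame_b, threshold=30):
    """Return the (row, col) positions where brightness changed by more than threshold."""
    changed = []
    for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for c, (pa, pb) in enumerate(zip(row_a, row_b)):
            if abs(pa - pb) > threshold:
                changed.append((r, c))
    return changed

# Two tiny 3x3 "frames"; one pixel brightens as if something moved.
frame1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame2 = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(detect_motion(frame1, frame2))  # [(1, 1)]
```

The gap between flagging changed pixels and understanding an operator's task is exactly why human annotation, as in the techniques of the earlier posts, is still needed.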
The Example of MBARI VARS
The best system I have seen to meet similar needs is not in manufacturing but for the videos of marine life collected by the Monterey Bay Aquarium Research Institute (MBARI) and organized in their Video Annotation and Reference System (VARS). It takes imagination to envision the transplantation of such a system to a manufacturing environment but access to it is both free and enjoyable.
According to MBARI, “Originally designed for annotating underwater video, VARS can be applied to any video dataset that requires constrained, searchable annotations.” This is an invitation to take a closer look.
Scale And Context
Upstream from VARS, MBARI has been using its own Automated Video Event Detector (AVED) to detect marine organisms in videos since 2008. It does not replace human annotators but saves their time by directing their attention to relevant video segments.
MBARI reports having annotated 23,000 hours of video recorded by its remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs). By comparison, the feature films produced in Hollywood in 2016 added up to about 1,400 hours, so MBARI’s VARS contains the equivalent of about 16 years of Hollywood’s output.
A more relevant comparison is with the total amount of video time that would be needed to record all production operations in a factory. This is of interest strictly for capacity planning purposes, with no implied suggestion of undertaking this as a project, which would entail having a camera crew systematically roaming the shop floor to record every operation. It would not only be costly but also cause tension with camera-shy operators and yield many videos of limited value.
Instead, the repository should be for storage and retrieval of the videos you generate as part of improvement projects and use to analyze operations as described in the earlier posts. Over time, you can expect continuous improvement to provide videos of most if not all operations, and captured for a purpose.
If the factory employs 1,000 operators at a takt time of 1 minute, then recording each operator once would take about 1,000 minutes. If you take 10 repetitions of each operation, you get to 10,000 minutes, or about 167 hours. Then, if each operation is improved twice a year, the video updates will produce 333 hours/year. At 1GB of storage space per hour of high-definition video, it will take 3 years to fill a 1TB disk, and you don’t absolutely need high definition.
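The arithmetic above can be laid out as a small capacity-planning calculation; all the numbers are the assumptions stated in the paragraph.

```python
operators = 1000           # operators in the factory
takt_minutes = 1           # minutes per operation
repetitions = 10           # recordings of each operation
improvements_per_year = 2  # each operation improved twice a year
gb_per_hour = 1            # storage per hour of high-definition video
disk_tb = 1                # size of the disk to fill

baseline_hours = operators * takt_minutes * repetitions / 60
yearly_hours = baseline_hours * improvements_per_year
yearly_gb = yearly_hours * gb_per_hour
years_to_fill = disk_tb * 1000 / yearly_gb

print(round(baseline_hours))  # ~167 hours for one pass over all operations
print(round(yearly_hours))    # ~333 hours/year of updates
print(round(years_to_fill))   # ~3 years to fill a 1 TB disk
```

The point of the exercise is that, even under generous assumptions, the storage demand is modest by current hardware standards.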
The VARS Software
MBARI has made the VARS software available to download on SourceForge. Before rushing to try it, however, you should note that this is not a commercial app you install and start by double-clicking an icon. If you are a manager or a production engineer in a manufacturing company, don’t try to do it yourself. Instead, ask an IT engineer to do it for you and expect it to take more than a few minutes. Eventually, you have the following three applications:
- Query. The MBARI VARS query application produces tables of annotations about the videos collected, including still images of the frames within the videos to which each annotation is attached, and its location within the video. This is the application you use to retrieve all the video segments containing shrimp, or showing a hole in an aluminum plate being enlarged. The other applications are there to make it possible. The following is an example showing a vampire squid shot at a depth of >700m in 2000.
- Knowledgebase. The MBARI VARS knowledge base holds data about species, genus, habitat, behaviors, predators and prey, etc. for all the species that have been observed, with links to drawings, photographs, and videos. In a manufacturing setting, it would contain metadata like the nomenclature of products, processes, operations, materials, operator skills, key points, etc., with corresponding links to media. The application lets you view and manually edit this data, but does not appear to support any bulk upload or linkage to other systems that already have at least some of it, like the systems used in manufacturing for ERP or process planning. Being the system in which MBARI has accumulated this data over time, VARS has not needed this kind of integration, but a manufacturing application is unthinkable without it.
- Annotation. The annotation interface lets you attach structured data to a video or to a frame grabbed from a video, including timestamps, species observed, descriptions, observer ID, ship ID, camera direction, etc. This is similar to the information collected by time study software like Dartfish for sports, Timer Pro for manufacturing, or the academic ANVIL. These tools are focused on the job of analyzing one video, as opposed to organizing the results of analyzing hundreds or thousands of them. If, however, you have already analyzed videos with some of these tools, you would want to upload the results, which this software won’t let you do. It should also be noted that the term “video annotation” is more often used in the sense of marking up videos than in the sense of attaching structured data.
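In a manufacturing transplant of this model, an annotation might look like the structured record in the following sketch, with a small matching function of the kind a query application runs over thousands of such records. The field names and values are invented for illustration, not VARS’s actual schema.

```python
# A hypothetical annotation record attaching structured data to a video frame.
annotation = {
    "video_id": "press-shop-0042",
    "timestamp_s": 85.4,            # position within the video, in seconds
    "operation": "hole enlargement",
    "material": "aluminum plate",
    "tool": "step drill",
    "observer": "process engineer 7",
    "key_points": ["clamp before drilling", "two-stage feed"],
}

def matches(record, **criteria):
    """True if the record has every given field set to the given value."""
    return all(record.get(field) == value for field, value in criteria.items())

print(matches(annotation, operation="hole enlargement"))  # True
print(matches(annotation, material="steel"))              # False
```

A query like the hole-enlargement example earlier is then just a filter over all annotations, returning each matching video segment with its timestamp.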
MBARI VARS offers a glimpse of what can be done, helps you understand the requirements for a video repository of your manufacturing processes, but does not meet all these requirements. It is also a mature system, based on technology choices made a decade ago when some of the currently popular document databases didn’t exist.
The seven articles on the art of using videos to improve operations from 2013 were:
- Overview and motivation
- Management preparation
- Shooting shop floor videos
- Watching as a team
- Watch it in fast motion
- Quick simograms
- Detailed review of process segments