Patent attributes
The adaptive artificial vision method comprises the following steps:
(a) defining successive couples of timesteps (t−1, t; t, t+1; ...) synchronized by a clock (101);
(b) comparing two successive images (It−1, It; It, It+1, ...) from an input device (102, 103) at each couple of synchronized timesteps (t−1, t; t, t+1; ...) spaced by a predetermined time delay τ0, to obtain a delta image Δt computed as the distance between each pair of corresponding pixels of the two successive images (It−1, It; It, It+1, ...), so as to characterize the movements of objects;
(c) extracting features from the delta image Δt to obtain a potential dynamic patch Pt, which is compared with the dynamic patches previously recorded in a repertory that is progressively constructed in real time from an initially empty repertory;
(d) selecting the closest dynamic patch Di in the repertory or, if no sufficiently close dynamic patch exists yet, adding the potential dynamic patch Pt to the repertory, thereby obtaining and storing a dynamic patch Di from the comparison of two successive images (It−1, It; It, It+1, ...) at each couple of synchronized timesteps (t−1, t; t, t+1; ...); and
(e) temporally integrating the stored dynamic patches Di of the repertory in order to detect and store stable sets of active dynamic patches, each representing a characterization of a recurring movement or event that is observed.
A process of static pattern recognition may then be applied efficiently. A sketch of the processing pipeline is given below.
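The following is a minimal Python sketch of steps (a) through (e), not the patented implementation: the names (`DynamicPatchRepertory`, `extract_patch`, `stable_patch_sets`), the block-averaged motion-energy features, the Euclidean distance and threshold used for matching, and the sliding-window temporal integration are all illustrative assumptions, since the text above does not specify them.

```python
import numpy as np
from collections import Counter


def delta_image(prev_frame, curr_frame):
    # Step (b): per-pixel distance between two successive images I_{t-1} and I_t.
    return np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))


def extract_patch(delta, grid=(8, 8)):
    # Step (c): hypothetical feature extractor -- block-averaged motion energy
    # on a coarse grid (the text above does not fix a particular feature scheme).
    gh, gw = grid
    h, w = delta.shape
    bh, bw = h // gh, w // gw
    trimmed = delta[:gh * bh, :gw * bw]
    return trimmed.reshape(gh, bh, gw, bw).mean(axis=(1, 3)).ravel()


class DynamicPatchRepertory:
    """Steps (c)-(d): repertory built in real time from an initially empty set."""

    def __init__(self, distance_threshold):
        self.patches = []                 # stored dynamic patches D_i
        self.threshold = distance_threshold

    def match_or_add(self, candidate):
        # Select the closest stored patch or, if none is close enough,
        # add the candidate P_t as a new dynamic patch; return its index.
        if self.patches:
            dists = [np.linalg.norm(candidate - p) for p in self.patches]
            i = int(np.argmin(dists))
            if dists[i] <= self.threshold:
                return i
        self.patches.append(candidate)
        return len(self.patches) - 1


def process_stream(frames, repertory):
    # Steps (a)-(b): frames are assumed to arrive at successive clock-synchronized
    # timesteps separated by the delay tau_0; each couple (I_{t-1}, I_t) is compared.
    history = []
    for prev_frame, curr_frame in zip(frames, frames[1:]):
        delta = delta_image(prev_frame, curr_frame)
        patch = extract_patch(delta)
        history.append(repertory.match_or_add(patch))
    return history


def stable_patch_sets(history, window=10, min_count=3):
    # Step (e), simplified: sets of patch indices that recur across a sliding
    # window are kept as characterizations of a recurring movement or event.
    counts = Counter()
    for i in range(len(history) - window + 1):
        counts[frozenset(history[i:i + window])] += 1
    return [s for s, c in counts.items() if c >= min_count]
```

In this sketch the repertory grows only when an incoming patch is farther than the chosen threshold from every stored patch, which mirrors the "select closest or add" rule of step (d); the threshold, window size, and minimum count are free parameters of the illustration, not values taken from the patent.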