{"id":612,"date":"2019-05-29T16:20:37","date_gmt":"2019-05-29T20:20:37","guid":{"rendered":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/?page_id=612"},"modified":"2019-05-29T16:23:59","modified_gmt":"2019-05-29T20:23:59","slug":"navigation-and-state-estimation","status":"publish","type":"page","link":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/research\/navigation-and-state-estimation\/","title":{"rendered":"Navigation and State Estimation"},"content":{"rendered":"\n\t<h1>Navigation and State Estimation<\/h1>\n<h2>ADCP-Aided Navigation (In Collaboration with Lash Medagoda)<\/h2>\n<p>This research develops estimation techniques for underwater navigation for autonomous underwater vehicles, as part of a Air Force Research Laboratory research grant. The ADCP-aiding methodology utilizes spatiotemporal models for the water currents, along with sensor modelling and data fusion of other sensors, allowing navigation in the absence of USBL, LBL, GPS or DVL, allowing mid-water column localization. In particular, a number of extensions are developed to improve navigation performance during missions characterized by prolonged time-scales and horizontal transits. This includes explicitly accounting for the temporal evolution of water currents while revisiting locations and during the re-observation itself, in addition to employing high fidelity spatial models to account for the horizontal water current field. \u00a0The method has been validated with results from the Sentry AUV from a hydrothermal vent flux estimation mission and a surveying mission at Deepwater Horizon, illustrating a potentially real-time dead-reckoning method with results for up to 20 hours.<\/p>\n<h2>Nonlinear State Observers for Underwater Robot Navigation<\/h2>\n<p>State estimation is a field of dynamic\u00a0systems in which a system model is combined with measurements to estimate the state of the system. 
I research nonlinear dynamic model-based state estimators for underwater robot navigation that exploit knowledge of the vehicle\u2019s nonlinear dynamics, the forces acting on the vehicle, and position and velocity measurements to estimate the position and velocity of the underwater robot. Initial results, published at the IEEE International Conference on Robotics and Automation, derived a novel nonlinear observer (NLO) that uses the nonlinear representation of the vehicle dynamics, and reported initial experiments evaluating the NLO alongside the extended Kalman filter (EKF), a stochastic method that uses a linearized model of the vehicle dynamics. My IEEE Transactions on Control Systems Technology publication reports an extensive set of laboratory experiments using a high-precision acoustic positioning system that provided ground-truth measurements; the results show that the NLO is superior to the EKF with respect to a number of criteria, including precision, accuracy, and robustness. Furthermore, I report the results of January 2012 experiments undertaken at 2300 m depth with the Jason ROV that evaluate these methods using sensors possessing noise characteristics typical of at-sea operations. The results have a number of impacts on the navigation and operation of underwater robots. First, the NLO provides an alternate method for fusing sporadic, noisy position measurements with frequent, high-precision DVL measurements. Second, and more importantly, the NLO can estimate the position and velocity of the robot in the absence of DVL measurements. For present operations, this provides robustness against sensor failures. In the longer term, these methods enable underwater robots to operate beyond the range of DVLs while maintaining high-precision navigation. 
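The first impact noted above, fusing sporadic, noisy position fixes with frequent velocity measurements, can be sketched with a scalar Kalman filter. This one-dimensional toy is a stand-in for illustration only, not the NLO or the EKF from the publications, and the noise values are arbitrary:

```python
def fuse_dvl_and_fixes(z_vel, z_pos, dt, q=0.01, r_pos=1.0):
    """1-D Kalman filter: predict on frequent velocity measurements
    (DVL-like), correct on sporadic position fixes (LBL/USBL-like).

    z_vel : velocity measurement at every step
    z_pos : position fix at a step, or None when no fix arrived
    q     : process-noise variance added each step (illustrative)
    r_pos : position-fix measurement variance (illustrative)
    """
    x, P = 0.0, 1.0              # position estimate and its variance
    track = []
    for v, zp in zip(z_vel, z_pos):
        # Predict: dead-reckon on the velocity measurement.
        x += v * dt
        P += q                   # uncertainty grows between fixes
        # Update: fold in a position fix when one is available.
        if zp is not None:
            K = P / (P + r_pos)  # Kalman gain
            x += K * (zp - x)
            P *= (1.0 - K)
        track.append(x)
    return track
```

Between fixes the estimate drifts with the integrated velocity error; each fix pulls it back, which is the behavior both the EKF and the NLO formalize for the full nonlinear vehicle model.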
This capability would advance our abilities to use robots in the mid-water column \u2014 a region of intense interest to the oceanographic community where AUVs and ROVs are rarely used at present.<\/p>\n<h2>In-situ Adaptive Identification of DVL-INS Alignment<\/h2>\n<p>One area where I apply my expertise in dynamic systems is the investigation of methods for identifying parameters, such as in situ sensor calibrations. For example, I have researched two methods for the in situ identification of the alignment between Doppler velocity logs (DVLs) and inertial navigation systems (INSs) used in dead-reckoning (DR) underwater robot navigation. These techniques not only identify the alignment that minimizes the error between two independent three-dimensional (3D) measurement sets but also ensure that the identified alignment preserves the kinematic rotations between the 3D point sets. The first method, an adaptive identifier constrained to the special orthogonal group SO(3), provides the alignment in the form of a rotation matrix. The second method, by Jordan Stanway, a graduate student I co-advised, uses geometric algebra (GA). The theoretical derivations of these methods, along with proofs of their stability using Lyapunov theory, were first reported at the IEEE International Conference on Robotics and Automation. Both methods have been subsequently evaluated using data from a laboratory ROV and in at-sea experiments with AUVs, and these results are reported in the IEEE Transactions on Robotics. These papers show that both the SO(3) and GA adaptive identifiers significantly reduce DR navigation errors, thereby improving underwater robot navigation and the quality of oceanographic data obtained with these vehicles. 
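The alignment objective these identifiers address, finding the rotation in SO(3) that best maps one 3D measurement set onto the other, can be illustrated with a batch orthogonal Procrustes solve. This SVD-based sketch is not the adaptive SO(3) identifier or the GA method from the papers; it only shows the underlying least-squares problem they solve online:

```python
import numpy as np

def align_so3(v_dvl, v_ins):
    """Find R in SO(3) minimizing sum ||v_ins_i - R v_dvl_i||^2
    over paired 3-D velocity samples (orthogonal Procrustes).

    v_dvl, v_ins : (N, 3) arrays of corresponding velocity vectors
    """
    H = v_dvl.T @ v_ins                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation,
    # i.e. the solution stays on SO(3) rather than O(3).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

The determinant guard plays the same role as constraining the identifier to SO(3): it guarantees the result is a kinematically valid rotation, not a reflection.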
The GA approach possesses a number of advantages, including a more straightforward derivation, simpler gain tuning, computational efficiency, and reduced data manipulation.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Navigation and State Estimation ADCP-Aided Navigation (In Collaboration with Lash Medagoda) This research develops estimation techniques for underwater navigation for autonomous underwater vehicles, as part of an Air Force Research Laboratory research grant. The ADCP-aiding methodology utilizes spatiotemporal models for the water currents, along with sensor modelling and data fusion of other sensors, allowing navigation&hellip;<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":603,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/pages\/612"}],"collection":[{"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/comments?post=612"}],"version-history":[{"count":2,"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/pages\/612\/revisions"}],"predecessor-version":[{"id":614,"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/pages\/612\/revisions\/614"}],"up":[{"embeddable":true,"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/pages\/603"}],"wp:attachment":[{"href":"https:\/\/www2.whoi.edu\/site\/jameskinsey\/wp-json\/wp\/v2\/media?parent=612"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}