The context of the problem is that I have several robots located remotely which report their position (x, y coordinates) every T seconds and send it to a centralized remote server. The value of T depends on the update frequency of each robot, so it can be 60, 70, 80 or any other number of seconds.
On this server, I take these measurements (the x, y coordinates of each robot) and run a Kalman filter to predict the next steps, since the measurements arrive too infrequently and I need the predicted positions in between.
The Kalman filter has the equations:
x[k] = x[k-1] + v[k-1]*dt
v[k] = v[k-1]
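In state-space form this is a constant-velocity model whose transition matrix depends on the elapsed time dt between updates. A minimal sketch in Python (numpy assumed; I take the state as [x, y, vx, vy], applying the scalar equations above per axis):

```python
import numpy as np

def transition_matrix(dt):
    # Constant-velocity model per axis:
    #   x[k] = x[k-1] + vx[k-1]*dt,  vx[k] = vx[k-1]  (same for y)
    return np.array([[1., 0., dt, 0.],
                     [0., 1., 0., dt],
                     [0., 0., 1., 0.],
                     [0., 0., 0., 1.]])
```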
The robots are linked in pairs, and the filter of each robot (within a pair) is updated when a measurement arrives from either of the two robots.
Here comes the problem: the update frequency of the filters depends on the timing of the robots' measurements. For instance:
Two robots giving measurements every 10s and every 25s: the filters are initialized at t=0s and then updated at t=10s, t=20s, t=25s, t=30s, etc.
Two robots giving measurements every 3s and every 5s: the filters are initialized at t=0s and then updated at t=3s, t=5s, t=6s, t=9s, t=10s, etc.
Because of these differing update times, the filters of some pairs of robots work very well, but others do not.
The matrices Q and R stay constant at all times. For the Q matrix, the values were chosen according to the error produced by the maximum acceleration the robots can reach (the problem is probably here).
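For reference, this is roughly how I derive Q from the maximum acceleration (a sketch of a piecewise white-noise acceleration model; a_max is my own bound, and in my current code I evaluate this once with a fixed dt, which may be the issue):

```python
def process_noise(dt, a_max):
    # Piecewise white-noise acceleration model: the unknown acceleration
    # (bounded by a_max) acts as noise integrated over the step dt.
    g = np.array([[0.5 * dt**2, 0.],
                  [0., 0.5 * dt**2],
                  [dt, 0.],
                  [0., dt]])
    return g @ g.T * a_max**2
```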
How can I implement a filter (a class) that works for all of the pairs independently of the filter update times and the measurement arrival times? What is the dependence between the update time of the filter and the time when a measurement arrives? I mean, how does it affect the filter quantitatively? Should I modify the matrices Q, R and/or P according to the time difference? It is an open question since I am a bit lost. Any help would be highly appreciated.
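To make the question concrete, this is the kind of class I am aiming for, where every call receives the actual measurement timestamp instead of assuming a fixed step (a sketch reusing the transition_matrix and process_noise helpers above; all names are placeholders):

```python
class PairFilter:
    """Constant-velocity Kalman filter for one robot pair, state [x, y, vx, vy]."""

    H = np.array([[1., 0., 0., 0.],   # only positions are measured
                  [0., 1., 0., 0.]])

    def __init__(self, x0, P0, R, a_max, t0=0.0):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.R = np.asarray(R, dtype=float)    # measurement noise (2x2)
        self.a_max = a_max
        self.t_last = t0

    def step(self, t, z):
        # Predict over the actual elapsed time, whichever robot the
        # measurement came from, then correct with z = (x, y).
        dt = t - self.t_last
        self.t_last = t
        F = transition_matrix(dt)
        Q = process_noise(dt, self.a_max)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R        # innovation covariance
        K = self.P @ np.linalg.solve(S.T, self.H).T    # gain K = P H^T S^-1
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```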