Why:
In the visualization tool, we need to check whether the agent arrived at the
target viewpoint, which requires the distance between the GT viewpoint and
the predicted viewpoint. That distance is hard to compute without the
simulator: the visualization tool runs in a Jupyter notebook outside the
docker container, so it cannot use the simulator.
How:
After gathering the results from the env, we run eval_metrics() to get the
success rate, FOUND score, etc. We take the per-episode "success" from
eval_metrics() and log it in the predicted file, so that the visualization
tool can read the "success" status directly from the predicted file.
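The merge step above can be sketched roughly as follows. This is a minimal
illustration, not the actual code: the function name
`log_success_to_predictions`, the JSON file layout (a list of dicts keyed by
`instr_id`), and the `metrics_by_instr_id` mapping are all assumptions.

```python
import json

def log_success_to_predictions(pred_path, metrics_by_instr_id):
    """Merge the per-episode 'success' flag from eval_metrics() into the
    predicted file so the visualization tool can read it without the
    simulator. File layout and key names are hypothetical."""
    with open(pred_path) as f:
        preds = json.load(f)  # assumed: a list of prediction dicts
    for p in preds:
        # attach the success flag computed by eval_metrics()
        p["success"] = metrics_by_instr_id.get(p["instr_id"], 0)
    with open(pred_path, "w") as f:
        json.dump(preds, f, indent=2)
```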
- Situation: in the visualization tool, some false-negative cases (the agent
  receives an adversarial instruction but returns True (found)) have a
  found_objId of None, which raises a KeyError and interrupts the tool.
- Solution: the condition function in agent_obj.py returns NOT_FOUND when it
  detects that found_objId is -1, but it does not check for None, so it
  always returns FOUND when found_objId is None. That breaks the rules, so I
  added a condition to check whether found_objId is -1 or None.
- Other: when running the testing scripts, the user can choose to replace
  the existing output file.
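The condition fix in agent_obj.py amounts to treating None the same as -1.
A minimal sketch, assuming hypothetical names for the condition function and
the FOUND/NOT_FOUND return values:

```python
NOT_FOUND = "not_found"
FOUND = "found"

def found_condition(found_objId):
    # Treat both -1 and None as "nothing found". Previously only -1 was
    # checked, so a found_objId of None fell through and was reported as
    # FOUND, which later caused the KeyError in the visualization tool.
    if found_objId == -1 or found_objId is None:
        return NOT_FOUND
    return FOUND
```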