- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Done
- Affects Version/s: None
- Fix Version/s: SEBA 2.0
- Component/s: NEM
- Labels: None
- Story Points: 2
- Epic Link:
When ONOS is restarted, the OLT service fails to push the service-instances down again, and the volt synchronizer logs:
2019-06-14T18:32:33.206422Z [info ] Processing event event_msg=<xossynchronizer.event_engine.XOSKafkaMessage instance at 0x7ff403d51128> step=KubernetesPodDetailsEventStep
2019-06-14T18:32:34.057892Z [error ] Exception in event step event_msg=<xossynchronizer.event_engine.XOSKafkaMessage instance at 0x7ff403d51128> step=KubernetesPodDetailsEventStep
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/xossynchronizer/event_engine.py", line 139, in run
).process_event(event_msg)
File "/opt/xos/synchronizers/volt/event_steps/kubernetes_event.py", line 60, in process_event
onos = KubernetesPodDetailsEventStep.get_onos(service)
File "/opt/xos/synchronizers/volt/event_steps/kubernetes_event.py", line 42, in get_onos
raise Exception('Cannot find ONOS service in provider_services of Fabric-Crossconnect')
Exception: Cannot find ONOS service in provider_services of Fabric-Crossconnect
2019-06-14T18:32:34.058726Z [info ] Processing event event_msg=<xossynchronizer.event_engine.XOSKafkaMessage instance at 0x7ff400dc8908> step=KubernetesPodDetailsEventStep
The underlying issues are:
- SEBA-723 reversed the dependency between vOLT and ONOS, so ONOS is no longer found in provider_services of Fabric-Crossconnect
- helpers.py is not used to get the ONOS info in the kubernetes event step, so that lookup was not updated along with the dependency reversal (see the sketch after this list)
- Event steps don't retry, so the failure occurs once per event and the log message above is easy to miss
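A minimal sketch of a direction-agnostic lookup for the event step, not the actual fix: provider_services is taken from the log above, while subscriber_services, leaf_model, and the name-matching heuristic are assumptions for illustration rather than names from the real volt synchronizer or helpers.py.

```python
# Hypothetical sketch only. SEBA-723 reversed the vOLT <-> ONOS
# dependency, so a lookup that only checks provider_services of
# Fabric-Crossconnect now fails; searching both sides of the service
# graph tolerates either direction.

def get_onos(service):
    # provider_services appears in the exception above;
    # subscriber_services is an assumed symmetric accessor.
    candidates = list(service.provider_services)
    candidates += list(getattr(service, "subscriber_services", []))
    for svc in candidates:
        if "onos" in svc.name.lower():
            # leaf_model (assumed accessor) resolves the base Service
            # row to its concrete ONOSService subclass.
            return svc.leaf_model
    raise Exception(
        "Cannot find ONOS service linked to %s" % service.name)
```

For the third point, having the step re-raise so the event can be redelivered, or wrapping process_event in a bounded retry, would keep a single missed log line from silently dropping the service-instances; retry semantics arguably belong in the synchronizer framework rather than in each event step.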