The ICI team sells and services Cisco and Huawei network equipment. Most of our customers are small and medium-sized service providers who often do not receive adequate support. If you face similar challenges, talk to us; we may be able to help.
Symptoms: A vulnerability in MPLS LDP packet processing in Cisco IOS XR could allow an unauthenticated, remote attacker to cause a reload of the MPLS LDP process on an affected device.
The vulnerability is due to improper processing of crafted MPLS LDP packets. An attacker could exploit this vulnerability by sending crafted MPLS LDP packets to be processed by an affected device. A successful exploit could allow the attacker to cause a reload of the MPLS LDP process on the affected device.
Conditions: A Cisco IOS XR device configured to process MPLS LDP packets.
Workaround: Disable advertising of FECs with prefix length <=24
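A minimal configuration sketch of this workaround, assuming a hypothetical ACL name LDP_ADV_FEC and that the release supports ACL-based label advertisement under 'mpls ldp' (on newer releases the equivalent knob sits under 'mpls ldp address-family ipv4 label local advertise'); the ACL entry is a placeholder and must be adapted to permit only the FECs that should still be advertised:

ipv4 access-list LDP_ADV_FEC
 ! placeholder entry: permit only FECs with prefix length greater than /24
 10 permit ipv4 host 10.0.0.1 any
!
mpls ldp
 label
  advertise
   for LDP_ADV_FEC
  !
 !
!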
PSIRT Evaluation: The Cisco PSIRT has assigned this bug the following CVSS version 2 score. The Base and Temporal CVSS scores as of the time of evaluation are 4.3/3.6: http://tools.cisco.com/security/center/cvssCalculator.x?vector=AV:N/AC:M/Au:N/C:N/I:N/A:P/E:F/RL:OF/RC:C&version=2.0 CVE ID CVE-2015-4223 has been assigned to document this issue. Additional information on Cisco's security vulnerability policy can be found at the following URL: http://www.cisco.com/en/US/products/products_security_vulnerability_policy.html
[CTC interop] XML Client/CTC not able to subscribe to Alarm Manager
Status:
Fixed
Severity:
1 Catastrophic
Description:
Symptom: An alarm-subscription request from CTC/XML client fails with a message stating that the subscription already exists.
Conditions: This is a race condition: the Alarm Manager (AM) process is sending alarms at the same moment an unsubscribe request comes in, leading to a deadlock between the AM and XML processes.
Workaround: Restart the alarm-manager process and the XML process, as sketched below.
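A sketch of this workaround, assuming the process names 'alarm_mgr' and 'xml_tty_agent'; verify the actual names on your platform with 'show processes' first:

RP/0/RP0:router# process restart alarm_mgr
RP/0/RP0:router# process restart xml_tty_agent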
[R602] 3rd XR VM spawned after XR reload triggered by install failure
Status:
Fixed
Severity:
2 Severe
Description:
Symptom:
After the system boots up, all XR VMs reload due to an install operation failure; a third XR VM is then spawned unexpectedly and fails to come up due to insufficient resources, and QEMU crashes on the host.
Conditions: Occurs only when an XR reload and a reconnection between RP0 and RP1 happen at the same time.
Workaround: Reload the problematic RP
Further Problem Description: While performing ISSU from 5247 to r601/r602, the LC VMs did not come up and host crashes were observed.
install add failed, node failed to respond when completing disk checks
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: The install add CLI fails with one of the following messages:
Error: ERROR: Process 'insthelper' has been performing an operation for a period of time so that the node failed to
Error: respond when completing the installation of packages within the system.
Error: AFFECTED NODE(S): 0/RP1/CPU0 1/RP0/CPU0 1/RP1/CPU0
OR
Error: ERROR: Node failed to respond when completing the disk checks.
Error: AFFECTED NODE(S): 0/RP0/CPU0 0/RP1/CPU0
Conditions: None
Workaround:
Recover from the issue by restarting the insthelper process using the following CLI: (admin) process restart
Further Problem Description: The install add operation fails with the 'Node failed to respond' system error mentioned above. Once this issue happens, all subsequent install add operations fail with the same error. The only known way to recover is to restart the insthelper process.
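A sketch of this recovery, assuming classic IOS XR admin mode and that insthelper is restarted on each affected node; the node location is illustrative:

RP/0/RP0/CPU0:router# admin
RP/0/RP0/CPU0:router(admin)# process restart insthelper location 0/RP1/CPU0
RP/0/RP0/CPU0:router(admin)# exit

After the restart, retry the failed install add operation.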
core file not copied over from uvf to xr when uvf core generated
Status:
Open
Severity:
2 Severe
Description:
Symptom: Under uvf, killing the vpe_main process with kill -9 triggered the DPA reset.
From the dpc_rm_srv trace:
RP/0/RP0/CPU0:pe2#show control dpc rm trace error
3 wrapping entries (2112 possible, 64 allocated, 0 filtered, 3 total)
May 27 15:29:52.043 dpc_rm/error 0/RP0/CPU0 t3961 Received 0 bytes from DPA0. This indicates orderly shutdown of the socket by the DPA.
May 27 15:29:52.044 dpc_rm/error 0/RP0/CPU0 t3961 DPA0 : DPA has failure reset, DPC/DPA connection is reset
May 27 15:29:52.044 dpc_rm/error 0/RP0/CPU0 t3961 Aborting communication with DPA0.
RP/0/RP0/CPU0:pe2#run
The core file is generated under uvf, but never copied over to XR:
[uvf:/misc/scratch/core]$ pwd
/misc/scratch/core
[uvf:/misc/scratch/core]$ ls -altr
total 87228
drwxr-xr-x 4 root root     4096 May 24 16:11 ..
-rw-rw-rw- 1 root root    19696 May 27 15:38 vpe_4319.by.3.20160527-153822.uvf.db994.core.txt
-rw-rw-rw- 1 root root 89290424 May 27 15:40 vpe_4319.by.3.20160527-153822.uvf.db994.core.gz
drwxr-xr-x 2 root root     4096 May 27 15:40 .
[uvf:/misc/scratch/core]$
[xr-vm_node0_RP0_CPU0:/misc/scratch/core]$ pwd
/misc/scratch/core
[xr-vm_node0_RP0_CPU0:/misc/scratch/core]$ ls -altr
total 304
drwxr-xr-x 8 root root 4096 May 26 12:00 ..
vm_node0_RP0_CPU0.2e2f6.core.txt
drwxr-xr-x 3 root root 4096 May 27 09:30 .
-rw-r--r-- 1 root root    0 May 27 09:30 .clrcxt
drwxr-xr-x 2 root root 4096 May 27 09:30 .1
[xr-vm_node0_RP0_CPU0:/misc/scratch/core]$
RP/0/RP0/CPU0:pe2#show context
node: node0_RP0_CPU0
------------------------------------------------------------------
No context
RP/0/RP0/CPU0:pe2#
No corresponding core file has been generated on the XR side.
Conditions: image:
Refpoint = calvados/release@main/20
Built By : lucaslee
Built On : Tue May 24 10:46:35 EDT 2016
Build Host : ott-lb-028
Workspace : /workspace2/lucaslee/xr-dev-15I
Source Base : ios_ena
Devline : xr-dev.lu%EFR-00000325669
Devline Type : ACME Lineup xr-dev EFR-00000325669 Lineup
XR show platform returns UNKNOWN for all hardware after ISSU
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: The show platform output shows the node state and admin state as UNKNOWN for all cards.
Conditions: After ISSU, an RP card reload, or a VM switchover, show platform displays UNKNOWN for all cards.
Workaround: None
SC cards take more than 2 hours to go to XR RUN state
Status:
Fixed
Severity:
2 Severe
Description: *
Symptom: After an FCC reload (in an x+2 system), CRS-FCC-SC-22GE-B cards can take more than 2 hours to reach the XR RUN state.
Conditions: XR 5.3.3 with the hfr-px-5.3.3.CSCuz09063.pie SMU installed, and the FCC rack is reloaded (due to a reboot SMU install, a manual reload, or any other reason).
Workaround: None. The SC card completes bootup and recovers after a while.
Further Problem Description: The fabric card does not come up; it is stuck on an LRd query.
dpc_rm runs out of thread resources when the new client NSH is added
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: After configuring a bundle with static MAC and LACP, the bundle does not come up:
RP/0/RP0/CPU0:pe2#sh ipv4 int br
Interface        IP-Address  Status  Protocol  Vrf-Name
Bundle-Ether100  12.1.0.2    Down    Down      default

Port       Device  State       Port ID         B/W, kbps
---------  ------  ----------  --------------  ----------
Gi0/0/0/2  Local   Configured  0x8000, 0x0000  1000000
    Bundle is in the process of being replicated to this location
Gi0/0/0/3  Local   Configured  0x8000, 0x0000  1000000
    Bundle is in the process of being replicated to this location
This appears to be a dpc_rm resource issue caused by the addition of the new NSH feature.
bundle configuration lost on interface after router reload
Status:
Open
Severity:
2 Severe
Description: *
Symptom: Bundle configuration is lost on interfaces after router reload.
The following error messages are seen:
LC/0/7/CPU0:May 24 12:16:02.010 UAE: cfgmgr-lc[138]: %MGBL-CONFIG-4-FAILED : Some 'TenGigE0_7_4_0' Configuration could not be restored by Configuration Manager. To view failed config(s) use the command - 'show configuration failed startup'
LC/0/7/CPU0:May 24 12:16:02.012 UAE: cfgmgr-lc[138]: %MGBL-CONFIG-4-FAILED : Some 'TenGigE0_7_0_0' Configuration could not be restored by Configuration Manager. To view failed config(s) use the command - 'show configuration failed startup'
Conditions: The issue is observed on MSC and MSC-B as well as FP-40 cards after a CRS reload or an LC reload. The probability of hitting it on MSC/FP-140 or MSC/FP-400 is low.
Workaround: Not available.
More Info: The following is displayed when 'show configuration failed startup' is executed:
!! SEMANTIC ERRORS: This configuration was rejected by
!! the system due to semantic errors. The individual
!! errors with each failed configuration command can be
!! found below.
interface TenGigE0/0/0/0
 bundle id 1 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/0/1/0
 bundle id 2 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/0/2/0
 bundle id 1 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/0/4/0
 bundle id 2 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/7/0/0
 bundle id 1 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/7/1/0
 bundle id 2 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
interface TenGigE0/7/4/0
 bundle id 2 mode active
!!% 'CfgMgr' detected the 'fatal' condition 'This configuration has not been verified and can not be accepted by the system.'
!
end
NSR not ready due to BGP AD LDP session not sync'ed
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: The customer may not see L2VPN NSR ready after an RP failover. This prevents the entire router from declaring NSR readiness. The problem can also be reproduced by restarting the mpls_ldp process on the standby RP.
Conditions: The problem is seen with the BGP AD with LDP signaling feature when there is only a BGP AD PW on the corresponding LDP session, i.e., no extra manually configured PW over that same LDP session. That LDP session may be shared with L3 features.
The problem may happen on RP failover or on an mpls_ldp process restart on the standby RP under the above conditions.
Workaround: None.
Further Problem Description: The problem may happen on the standby RP when L2VPN synchronization messages are dropped due to an IPC connection being down between the L2VPN and mpls_ldp processes.
The AMI BITS (E1/T1/J1) modes are not working with the sync-E feature
Status:
Open
Severity:
2 Severe
Description:
Symptom: Existing NCS4K-RP hardware versions (800-39505-04 and below) do not support BITS AMI mode for T1/E1/J1. BITS interfaces will not be functional when configured in AMI mode of operation.
Example configuration:
clock-interface Bits0-In
 port-parameters
  etsi bits-input e1 fas ami
 !
 frequency synchronization
  selection input
  wait-to-restore 1
  quality receive highest itu-t option 1 PRC
 !
!
clock-interface Bits0-Out
 port-parameters
  etsi bits-output e1 fas ami
 !
 frequency synchronization
  quality transmit exact itu-t option 1 SSU-B
 !
!
frequency synchronization
 quality itu-t option 1
 clock-interface timing-mode system
!
Example output on problematic cards (Sync0 showing Down):
RP/0/RP0:ios#sh frequency synchronization clock-interfaces
Wed Mar 25 00:33:55.938 UTC
Node 0/RP0:
==============
  Clock interface Sync0 (Down: NONE)
    Assigned as input for selection
    Wait-to-restore time 1 minute
    SSM supported and enabled
    Input: Down
      Last received QL: None
      Effective QL: Failed, Priority: 100, Time-of-day Priority 100
      Supports frequency
    Output is disabled
    Next selection points: T0_SEL
Conditions: When the customer intends to use the BITS interface on the NCS4K platform in AMI mode.
Workaround: None. Fix: NCS4K-RP cards with version 800-39505-04 or lower need to be replaced with newer hardware (version 800-39505-05 or higher) if BITS AMI mode is to be used.
Follow the recommendations in Cisco FN 64142: AMI mode for BITS is not widely used. Customers are advised to check whether this feature is used in their network; if it is not, the existing NCS4K-RP does not need to be replaced.
Further Problem Description: Problem details, including example configuration CLIs and output, are given in the Symptom field. Customers can check the hardware version to identify whether they are affected. Alternatively, they can perform a loopback on the ETSI BITS-0 (InOut) port on the ECU using a 75-ohm BNC cable and apply the example configuration; problematic cards show Sync0 as Down.
IOS XR not rejecting faulty configuration for MPLS LDP Advertise
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: Traffic is black-holed after the addition and removal of MPLS LDP label-advertisement configuration.
Conditions: When MPLS advertise labels are configured, such as an ACL for a loopback, and virtual interfaces such as TE are added, the code accepts the configuration. But when the TE interfaces are removed, the behavior does not dynamically fall back to using only the ACL for the loopback, thereby black-holing traffic.
CRS Stopped Working - Protocols flap - rdsfs_svr crashes, nrs
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: The CRS stops forwarding traffic, OSPF and LDP flap, and the router becomes very slow; even regular show commands take minutes to execute. Error messages related to rdsfs_svr appear in the log, and the standby RP is not Ready.
Symptom: Memory leak in dhcpd. A process crash is observed when the dynamic memory limit is hit. Due to the memory leak, one of the following side effects may occur:
1. DHCP process crash on the active RP due to malloc failures.
2. Max-memory-hit (signal 31) crash.
3. If a checkpoint-related call fails (in a malloc), it may corrupt the checkpoint table, and the resulting process restart (crash) will flush all the bindings.
4. Impact on the standby RP (dhcpd assert) in checkpoint handling, due to checkpoint data being out of sync with the active RP.
Conditions: XR release 5.2.2 and later: router configured for DHCP proxy. XR releases 5.1.1, 5.1.2, 5.1.3, and 5.2.0: router configured for DHCP 'lease-proxy'.
Workaround: If the dhcpd process has crashed, a router reload may be required to clean up the checkpoint database. After that, a proactive restart of the dhcp process successfully clears the leak. Before 5.2.2, disabling 'dhcp-lease-proxy' on proxy profiles and clearing the older bindings avoids the leak. From 5.2.2 onward, the leak cannot be avoided by any configuration change; only a proactive process restart helps.
More Info: Run 'sh processes memory detail | i "^JID|dhcpd"' to check the dynamic memory limit of the dhcpd process; a sketch of the check-and-restart cycle follows.
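A sketch of the proactive check and restart described above, assuming the process name 'dhcpd' as shown in the More Info command:

RP/0/RP0/CPU0:router# sh processes memory detail | i "^JID|dhcpd"
RP/0/RP0/CPU0:router# process restart dhcpd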
8KB arenas created in sysdb_shared_sc causing fragmentation
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: 8KB arenas created in sysdb_shared_sc cause fragmentation and a crash.
Conditions: Some applications use malloc_opt to request a larger arena size; the arena can then be reduced to a smaller size, which can worsen fragmentation.
Workaround: A process restart typically helps reduce fragmentation.
Further Problem Description: This change sets the arena size to the minimum specified via malloc_opt. The behavior without this change has existed in previous releases for many years without issue, and it is unlikely to cause fragmentation bad enough to do harm; the change simply implements the proper, expected behavior of this malloc option. It is unlikely to have any impact on the router, so a SMU is not necessary for customers.
IPARM cb for IP Addr notification on LC is inconsistent for current upd
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: Client applications running on LCs may not receive IP address callbacks/updates in the following cases: 1) multiple registrations/unregistrations, 2) client apps configured with dual stack, 3) loopback delete and reconfigure.
Deletion of packet interfaces happens after LC reload
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: Ethernet Packet interfaces get deleted upon RP reload
Conditions: Configure Ethernet Packet interfaces and reload the active RP
Workaround: NONE
Further Problem Description:
1. Configure one L2 xconnect and bring up the traffic.
2. Reload the active RP VM, then wait until redundancy is established again and traffic flow resumes:
sysadmin-vm:0_RP0# sdr default-sdr location 0/RP0/VM1 reload
Sat Oct 17 07:59:58.162 UTC
RP/0/RP1:ios#show redundancy summary
Fri Oct 16 09:09:08.262 UTC
    Active Node    Standby Node
    -----------    ------------
          0/RP1           0/RP0 (Node Ready, NSR:Not Configured)
          0/LC0           0/LC1 (Node Ready, NSR:Not Configured)
3. Reload the active LC. After this, all the created packet interfaces are deleted, resulting in a complete traffic drop even though both RP and LC VM redundancy is up:
RP/0/RP1:ios#show interfaces brief
Fri Oct 16 09:27:12.052 UTC

          Intf        Intf        LineP       Encap  MTU     BW
          Name        State       State       Type   (byte)  (Kbps)
--------------------------------------------------------------------------------
           Nu0        up          up          Null   1500    0
Mg0/RP0/CPU0/0        admin-down  admin-down  ARPA   1514    10000000
Mg0/RP1/CPU0/0        admin-down  admin-down  ARPA   1514    1000000
RP/0/RP1:ios#show l2vpn xconnect
Fri Oct 16 09:28:26.161 UTC
Legend: ST = State, UP = Up, DN = Down, AD = Admin Down, UR = Unresolved, SB = Standby, SR = Standby Ready, (PP) = Partially Programmed

XConnect                    Segment 1                      Segment 2
Group  Name  ST             Description       ST           Description       ST
------------------------    -----------------------------  -----------------------------
1      1     UP             Te0/2/0/3         UP           Te0/2/0/4         UP
----------------------------------------------------------------------------------------
Symptom: After installing new software, the FPGA devices do not get reset, and the old DP-FPGA continues to run.
Conditions: Occurs on a fresh installation on a box with a slice configured; the FPGA keeps running with the old slice configuration and may require re-provisioning after installation.
Workaround: Run 'upgrade hw-module slice <> re-provision' if NEED UPG is seen after a fresh installation (see the sketch below).
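A usage sketch of the workaround, assuming slice 0 and a hypothetical location 0/0/CPU0; the exact argument order may differ by release:

RP/0/RP0/CPU0:router# upgrade hw-module slice 0 re-provision location 0/0/CPU0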
Symptom: Currently, ECU disk presence is monitored by mediasvr, which receives a notification from hushd when the mediasvr process starts. This enhancement ensures that the backplane driver notifies mediasvr when the ECU disk is OIR'ed, so mediasvr can update the status in the inventory.
Disable ASPM (Active State Power Management) on CRS-X Line Cards
Status:
Fixed
Severity:
2 Severe
Description: *
Symptom: On the FCC-based shelf controller, PCIe link errors were observed with the ASPM power-management feature enabled. Based on discussion with the chip vendor, the ASPM feature on the CRS-X line card is proactively disabled to avoid any PCIe link errors. Although link errors were not observed on CRS-X cards, this is a proactive fix.
Configuration is changed to "no shutdown" unexpectedly after reload.
Status:
Fixed
Severity: *
2 Severe
Description:
Symptom: Abnormal behavior has been seen after a reload: an interface configuration changes from shutdown to no shutdown. The same issue was also reproduced in the lab. The versions in use are 4.2.1 and 4.3.4, and the issue has appeared on both.
Conditions: none
Workaround: Workaround steps: no interface XX -> shut down the interface (*1) -> LC OIR -> reconfigure the interface -> reload (see the sketch below).
(*1) Configure 'shutdown' and then commit:
interface TenGigE0/x/x/x
 shutdown
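A sketch of the full workaround sequence, using a hypothetical interface TenGigE0/1/0/0; each step is committed before the next:

RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# no interface TenGigE0/1/0/0
RP/0/RP0/CPU0:router(config)# commit
RP/0/RP0/CPU0:router(config)# interface TenGigE0/1/0/0
RP/0/RP0/CPU0:router(config-if)# shutdown
RP/0/RP0/CPU0:router(config-if)# commit
! then perform the LC OIR, reapply the interface configuration, commit, and reload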
Symptom:
Error 1:
In function 'osc_memcpy', inlined from 'fr_netio_bind_protocol' at fr/netio/src/fr_netio.c:816:13:
./infra/rtd/osc/bosc/export/isan/include/osc_map.h:64:13: error: call to 'osc_compile_check_memcpy' declared with attribute error: Compiler Assertion: memcpy len will always overflow dst object size
Vulnerable code:
memcpy(&pdb->pinfo.u, pinfo, sizeof(fr_netio_protocol_info_type));
LC CPU hog due to TCAM error correction to invalid address
Status:
Fixed
Severity:
2 Severe
Description: *
Symptom: A CPU hog is seen on the 40G LC CPU. Many processes are blocked on tcam_mgr.
Conditions: The problem may be observed on CRS-MSC-B and CRS-FP40 if tcam parity errors are triggered.
Workaround: There is no workaround available. Recovery: Reload the linecard.
More Info: This defect causes the tcam_mgr process to get stuck in a loop trying to fix parity errors on the TCAM. Consequently, all processes that write into the TCAM are blocked on tcam_mgr. Potential side effects are traffic drops leading to black-holing.
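A sketch of the recovery named in the Workaround (reload the linecard), assuming classic IOS XR admin mode; the location 0/7/CPU0 is illustrative:

RP/0/RP0/CPU0:router# admin
RP/0/RP0/CPU0:router(admin)# hw-module location 0/7/CPU0 reload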
Symptom: We took a backup of the node VSO5 and restored the configuration.
While restoring the configuration, we found that the wavelength configuration of the DWDM controllers was not committed, with the following error:
Wavelengths cannot be changed because controller was not in shutdown mode.
We therefore configured the wavelength manually with the following steps: 1. Shut the optics controller 2. Commit 3. Configure the wavelength CLI 4. Commit 5. No shut 6. Commit
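A sketch of the manual sequence above, assuming a hypothetical optics controller 0/0/0/0 and ITU channel 20; the controller name and wavelength command syntax vary by platform and release:

RP/0/RP0:router# configure
RP/0/RP0:router(config)# controller optics 0/0/0/0
RP/0/RP0:router(config-Optics)# shutdown
RP/0/RP0:router(config-Optics)# commit
RP/0/RP0:router(config-Optics)# dwdm-carrier 50GHz-grid itu-ch 20
RP/0/RP0:router(config-Optics)# commit
RP/0/RP0:router(config-Optics)# no shutdown
RP/0/RP0:router(config-Optics)# commit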
NCS6k: NSR ready takes 30 minutes after standby RP XR VM reload
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: NSR ready for some components might be delayed in some cases.
Conditions: When OSPF is configured with both the UCMP and fast-reroute features, in certain rare conditions OSPF fails to send RIB the complete notification at the end of its backup-route computations.
There is no issue with forwarding functionality.
Workaround: None required; the next SPF or route change recovers from the condition, and if not, RIB forces recovery after 30 minutes.
Further Problem Description: This is a very rare condition and combination of route updates. It applies to all XR platforms and releases since 4.3.1, when the UCMP feature was added.
Te_control and mpls_lsd in mutex lock after RP and LC VM switchover
Status:
Terminated
Severity:
2 Severe
Description: *
Symptom: TE_control and mpls_lsd are in a mutex deadlock after an RP and LC VM switchover.
Conditions: Steps to reproduce:
1) Create two 1+1 BDIR tunnels and verify traffic.
2) Perform an RP and LC VM switchover and wait for redundancy to come up.
3) Once redundancy is up, check the tunnel state after the switchover using the 'show mpls traffic eng tunnel' CLI.
4) The CLI hangs and returns the error 'Failed to retrieve data from TE process: path affinity_map, error Device or resource busy (0x10)'.
Interlaken Stats are not getting updated though there is traffic flowing
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: Interlaken stats are not getting updated even though traffic is flowing.
Conditions: Seen with traffic flowing across inter-LC (CPAK) on the 6.0.1 image.
Workaround:
Further Problem Description: On 29C of the 6.0.1 image, with traffic flowing across inter-LC (CPAK), Interlaken stats are not getting updated; the Tx and Rx fields show 0.
RP/0/RP0:ancalgon_it-r1#show digi-framer slot 13 ilkn-count location 0/LC0
Wed Feb 10 18:23:25.911 IST
+-----------------------------------------------------------------------------------------------------------------------------------+
DIR    ---> DIRECTION TO MOINTOR RX or TX
BCD    ---> BYTE COUNT DATA
PCD    ---> PACKET COUNT DATA
ECD    ---> ERROR COUNT DATA
BMxECD ---> BURST MAX ERROR COUNT DATA
BMnECD ---> BURST MIN ERROR COUNT DATA
BCS    ---> BYTE COUNTER STATUS
PCS    ---> PACKET COUNTER STATUS
ECS    ---> ERROR COUNTER STATUS
BMxCS  ---> BURST MAX COUNTER STATUS
BMnCS  ---> BURST MIN COUNTER STATUS
LCs going through warm reset upon power cycle/reload all
Status:
Fixed
Severity:
2 Severe
Description:
Symptom: LCs go through a warm reset upon a power cycle, 'reload all', or 'reload rack' execution, leading to problems in the LC power-on sequence. This can cause LCs to remain in the powered-off or present state.
Conditions: Upon execution of a power cycle, 'hw-module location all reload', or 'reload rack'.
Workaround: None
Further Problem Description: LCs go through a warm reset upon a power cycle, 'reload all', or 'reload rack' execution. This breaks the CCC FSM, leading to cards remaining in the powered-off or present state.
Symptom: Process l2vpn_mgr exited with signal 11 (SEGV - Segmentation Fault) @l2vpn_fwd_get_l2fib_pw_load_balancing
Conditions: A change was made to the l2vpn configuration right before the fault. However, the issue was not reproduced in-house with just this configuration change.
RP/0/RSP0/CPU0:r1.nik.ams# show configuration commit changes 1000000553
Fri Mar 25 16:21:08.596 MET
Building configuration...
!! IOS XR Configuration 5.1.1
l2vpn
 no bridge group TMG
 bridge group TMG
  no bridge-domain TMG-VLAN21
  bridge-domain TMG-VLAN21
   no neighbor 10.10.10.3 pw-id 21
   neighbor 10.10.10.3 pw-id 21
    no backup neighbor 10.10.10.4 pw-id 21
   !
   no routed interface BVI21
  !
 !
!
end
Symptom: An intermittent crash in the telemetry encoder process is observed. The stack trace shows that this happens at schema_class_get_category in the MDA library, due to memory corruption.
Conditions: RootOper.InfraStatistics.InterfaceTable.Interface(*).Latest.GenericCounters is queried for.
Symptom: On a scaled IOS XR router (with a large IPv6 interface scale), the ipv6_nd process can remain blocked on the gsp process after a restart of ipv6_ma. This problem occurs only after a restart of the ipv6_ma process.
Conditions: The exact IPv6 interface scale that triggers this condition has not been validated; the problem is hit only when the gsp transport queue for ipv6_ma is full at the same time the ipv6_ma process is restarted. This specific issue was seen with the 5.3.3 release, but the same condition can occur in earlier releases as well.
Workaround: Another restart of the ipv6_ma process has been shown to recover from this issue (see the sketch below).
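A sketch of the recovery, restarting ipv6_ma on the affected node and confirming ipv6_nd is no longer blocked; the location is illustrative:

RP/0/RP0/CPU0:router# process restart ipv6_ma location 0/0/CPU0
RP/0/RP0/CPU0:router# show processes blocked location 0/0/CPU0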