
Wednesday, July 1, 2015

Cisco Notification Alert - Cisco Optical - CPT OTN 50 200 600 - 01-Jul-2015 16:49 GMT


Known Bugs - Carrier Packet Transport (CPT) System

Bug Id:
CSCuu30494
Title:
MEA Alarm observed in OTN port After the upgrade 9.703 to 9.704(Build17)
Description:

Symptom:
MEA alarm observed on OTN ports after the upgrade from 9.703 to 9.704.

Conditions:
1. Had a 12 node GNE - ENE setup with scale config (2 GNE - 10 ENE)
2. Performed the upgrade from 9.703 to 9.704 (Build 17)
3. Observed MEA alarms on all OTN ports on all the 12 nodes

Workaround:

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
03-JUN-2015
Known Affected Releases:
9.704
Known Fixed Releases:
Bug Id:
CSCut27201
Title:
Egress policing counters are not working correctly on all the queues
Description:

Symptom:
Egress policing counters are not working correctly on all queues.

Conditions:
Configure an egress policy on an interface and issue the show policy-map interface command.

Workaround:

Further Problem Description:

Status:
Open
Severity:
2 Severe
Last Modified:
04-JUN-2015
Known Affected Releases:
9.701
Known Fixed Releases:
Bug Id:
CSCuu55217
Title:
Bi-directional Traffic Hit on Create & Delete or VLAN EDIT of VPWS
Description:

Symptom:
Bidirectional traffic hit on create and delete of VPWS, or during VLAN edit of VPWS.

Conditions:
1. Created a VPWS over an unprotected TP with VLAN Single Tag (104)
2. After verifying Traffic, Edited the VLAN on both ends to Double Tag Range (outer Tag: 95-100, Inner tag: 103)
3. After Verifying traffic, again edited the VLAN on both ends to Double tag List (outer tag: 95,97, 99, 100, 101, 103,104 Inner tag: 103)
4. After verifying traffic, edited the VLAN again to Single tag Range (95,97, 99, 100, 101, 103,104 ) on both ends.
5. While verifying the traffic, found that the traffic was hit (stopped) in both directions.

Scenario 2:

The same issue is also seen while doing Create & Delete of VPWS in the same order as above instead of VLAN EDIT.

Workaround:
None

Further Problem Description:
When a service that has a double-tag list containing a range (e.g. 11,13,15-17,19) is deleted, hal_mpls_local_service_destroy_otag_itag_list() calls the BCM API to delete the BCM entry for each VLAN in the list. For the range (15-17 in this example) it tries to delete the BCM entry using both bcm_port_match_delete() and bcm_vlan_translate_action_range_delete(). Since the first call already deletes the BCM entry, bcm_vlan_translate_action_range_delete() returns -7 as an error. As a result, the function returns early, leaving the remaining VLAN entries stale.
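As a rough illustration only, the sketch below models the flow described above in plain C. It is not CPT source code and does not use the Broadcom SDK: the list/entry type and the two delete helpers are hypothetical stand-ins for bcm_port_match_delete() and bcm_vlan_translate_action_range_delete(), modelling only the behaviour relevant to the bug, i.e. the first delete removes the entry, the second then reports -7 ("not found"), and treating that return as fatal aborts the loop and strands the rest of the VLAN list.

/* Self-contained demo of the stale-entry scenario; all names are
 * illustrative stand-ins, not CPT or Broadcom SDK code. */
#include <stdio.h>
#include <stdbool.h>

#define E_NOT_FOUND (-7)   /* mirrors the -7 error mentioned in the description */

typedef struct {
    int  vlan_lo, vlan_hi; /* vlan_lo == vlan_hi means a single VLAN */
    bool installed;        /* still present in (simulated) hardware   */
} vlan_entry_t;

/* Stand-in for bcm_port_match_delete(): removes the entry. */
static int fake_port_match_delete(vlan_entry_t *e) {
    e->installed = false;
    return 0;
}

/* Stand-in for bcm_vlan_translate_action_range_delete():
 * the entry is already gone, so it reports "not found". */
static int fake_translate_range_delete(vlan_entry_t *e) {
    return e->installed ? 0 : E_NOT_FOUND;
}

/* Mirrors the flawed flow attributed to
 * hal_mpls_local_service_destroy_otag_itag_list() in the description. */
static int destroy_otag_itag_list(vlan_entry_t *list, int n) {
    for (int i = 0; i < n; i++) {
        if (list[i].vlan_lo != list[i].vlan_hi) {      /* range entry    */
            fake_port_match_delete(&list[i]);          /* deletes entry  */
            int rv = fake_translate_range_delete(&list[i]);
            if (rv != 0)
                return rv; /* BUG: -7 treated as fatal; one possible fix is
                              to treat "not found" as success and continue */
        } else {
            list[i].installed = false;                 /* single VLAN    */
        }
    }
    return 0;
}

int main(void) {
    /* Double-tag list with a range, e.g. 11,13,15-17,19 */
    vlan_entry_t list[] = {
        {11, 11, true}, {13, 13, true}, {15, 17, true}, {19, 19, true}
    };
    int n = (int)(sizeof list / sizeof list[0]);
    int rv = destroy_otag_itag_list(list, n);
    printf("destroy returned %d\n", rv);
    for (int i = 0; i < n; i++)
        if (list[i].installed)
            printf("stale VLAN entry left behind: %d-%d\n",
                   list[i].vlan_lo, list[i].vlan_hi);
    return 0;
}

Running this prints "destroy returned -7" followed by the VLAN entry (19) that was never cleaned up, matching the stale-entry behaviour described above.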

Status:
Fixed
Severity:
2 Severe
Last Modified:
04-JUN-2015
Known Affected Releases:
9.704
Known Fixed Releases:
Bug Id:
CSCut31736
Title:
Partial Ring Config Present in Node after deletion of Ring in CTC
Description:

Symptom:
Partial Ring Config Present in Node after deletion of Ring in CTC

Conditions:
- Created DH rings using ports 4/1 in WRC and 5/1 in PRC.
- While the WRC 4th card is ACTIVE and the 5th card in PRC is standby.
- After REP converged completely at both ends, deleted the PBR and the rings.
- Checked both WRC and PRC for REP deletion in IOS using the show rep topology command, and confirmed the deletion.
- After confirming the ring deletion as mentioned above, reloaded the 4th card in WRC.
- Once the 5th card in WRC came up as active, checked the REP topology and found that a partial ring topology is still present on the WRC end.

Workaround:
Reload the standby PTF after deleting the ring.

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
15-JUN-2015
Known Affected Releases:
10.20, 9.702
Known Fixed Releases:
Bug Id:
CSCuu39320
Title:
Error thrown BD max resource limit is reached while creating EVC
Description:

Symptom:
Error "BD max resource limit is reached" is thrown while creating an EVC.

Conditions:

Workaround:
Perform a PTF redundancy reload.

Further Problem Description:

Status:
Open
Severity:
2 Severe
Last Modified:
15-JUN-2015
Known Affected Releases:
9.704
Known Fixed Releases:
Bug Id:
CSCur18714
Title:
CTC is not allowing to delete H-VPLS neighbor from WRC node to PRC node
Description:

Symptom:
CTC does not allow deleting an H-VPLS neighbor from the WRC node to the PRC node, and vice versa.

Conditions:
Please refer to the summary.

Workaround:

Further Problem Description:

Status:
Terminated
Severity:
2 Severe
Last Modified:
15-JUN-2015
Known Affected Releases:
9.701
Known Fixed Releases:
Bug Id:
CSCuu08563
Title:
One Way VPLS Traffic Hit while PTF Reload - 9.703
Description:

Symptom:
One Way VPLS Traffic Hit while PTF Reload - 9.703

Conditions:
- Created a VPLS from 36/25 of Node 186 to 36/26 of Node 150 using the existing TP created between the nodes via 5/4.
- Pumped unlearned traffic and verified using Spirent.
- Reloaded the PTF card on Node 186, which has a single PTF.
- Once the node and the FOG PB came up, traffic resumed only on one end (Node 150) and did not resume on the other end (186).

Workaround:

Further Problem Description:

Status:
Other
Severity:
2 Severe
Last Modified:
15-JUN-2015
Known Affected Releases:
9.703
Known Fixed Releases:
Bug Id:
CSCuu02051
Title:
Traffic dropped for unlearned P2MP(VPLS)traffic with 2 or more fog links
Description:

Symptom:
Traffic dropped for unlearned P2MP (VPLS) traffic when 2 or more FOG links are used.

Conditions:
1. On one of the nodes (10.64.107.61), the PB is fanned out from all four ports of card 6 (6/1, 6/2, 6/3 & 6/4).
2. Had a few services, including VPWS and VPLS, running from PB ----> Node 253 ------> Node 63 -------> Node 186 ------- Node 61 --- PB.
3. Observed that when 6/1 is down, unlearned traffic is still transmitted on the same port (hg1) instead of switching to the other ports.

Workaround:
Send learned traffic, i.e., configure static MAC entries.

Further Problem Description:

Status:
Other
Severity:
2 Severe
Last Modified:
15-JUN-2015
Known Affected Releases:
9.703
Known Fixed Releases:
Bug Id:
CSCur23671
Title:
PTF/PTM Ports are flapping during dual TNC hardreset
Description:

Symptom:
PTF/PTM ports flap during a dual TNC hard reset, due to which the PB goes for a reload.

Conditions:
Steps to Reproduce
==================
1. All 12 nodes were upgraded to 9.701 (Neptune_Build_152) in CTC mode, and services and traffic were up.
2. Performed a dual TNC hard reset on one GNE node. It is a fully loaded node.
3. Observed that ports flap on the PTF/PTM, and the PBs connected to those ports go for a reload as the ports flap.
The node had one TNC card and one TNC-E card.

Workaround:
None

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.701, 9.702
Known Fixed Releases:
Bug Id:
CSCut15153
Title:
LACP pkts not forwarded on CPT50 ports with L2pt-lacp forward config
Description:

Symptom:
LACP pkts not forwarded on CPT50 ports with l2pt-lacp forward configuration

Conditions:
LACP packets are not forwarded on CPT50 interfaces in 9.702 under the following scenarios:

1. Node reset (CPT50 fan-out from PTF or PTM)
2. After upgrade, in case of CPT50 fan-out from PTF

Workaround:
CPT50 reset

Further Problem Description:
None

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.702
Known Fixed Releases:
Bug Id:
CSCut88625
Title:
IPC communication failure observed on FOG deletion
Description:

Symptom:
CPT-50 goes for a reboot and IPC communication fails between the uplink card and the CPT-50.

Conditions:
Provisioning/Pre-provisioning a CPT-50 on the PTM card when a CPT-50 is already provisioned on the same card.

Workaround:
Recreate the deleted FOG.

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
10.20, 9.70, 9.703
Known Fixed Releases:
Bug Id:
CSCut44996
Title:
CPT600 Intermittent PTS FAIL alarms on extensive SNMP polling on PTF
Description:

Symptom:
PTS FAIL alarms observed on several nodes across the network.

Conditions:
Frequent SNMP polling done on the PTF cards

Workaround:
None

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.537
Known Fixed Releases:
Bug Id:
CSCuu04363
Title:
Traceback observed in CPT-50 while stimulating SDP Timeout
Description:

Symptom:
Traceback observed in CPT-50 while simulating SDP timeout.

Conditions:
1. Created a EVC between the FOG PB from Node 150 to Node 186
2. Started Traffic from Spirent Port (Rate: 10% Port based)
3. Generated SDP Timeout in CPT-50 using the POKE Command and observed the mentioned Traceback as attached.
4. The same traceback is also seen while recovering the CPT-50 from the SDP timeout situation using the POKE command.

Workaround:

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.703
Known Fixed Releases:
Bug Id:
CSCuu57524
Title:
On dual TNC's removed and inserted, PTF hanged and front ports flapped
Description:

Symptom:
When both the TNCs are removed and inserted back, the standby PTF hangs and the front ports of the PTF/PTM flap.

Conditions:
Both the controller cards have to be removed and inserted back.

Workaround:
For the port flap: none.
For the PTF hang: issue "collaHardResetCard" for the particular card from the controller card's VxWorks shell.

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.701, 9.702
Known Fixed Releases:
Bug Id:
CSCut91278
Title:
DB loss observed while deleting PPM where TE-Link is created
Description:

Symptom:
DB loss observed while deleting PPM where TE-Link is created

Conditions:
1. Create PPM and make it admin up
2. Create TE-Link over the port
3. Delete the PPM
4. Perform a dual PTF reset.

Workaround:
NA

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
10.20, 9.705
Known Fixed Releases:
Bug Id:
CSCuu34630
Title:
DB loss observed while modifying the existing class map
Description:

Symptom:
DB loss observed while modifying the existing class map

Conditions:
DB loss observed while modifying the existing class map

Workaround:
NA

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
9.705
Known Fixed Releases:
Bug Id:
CSCuu00296
Title:
EFP state down in CTC is not blocking traffic after In-Service VLAN Edit
Description:

Symptom:
Issue:
EFP state down in CTC is not blocking traffic after In-Service VLAN Edit

Problem description:
Making the EFP state down through CTC blocks the traffic initially. After editing the VLAN services of the EFP, the traffic is no longer blocked even though the EFP state is still down.

Conditions:
Topology :

Node 32 ---------Node 77

Build : Prayag-Build-40

Reproducibility : 3/3

Steps to reproduce :

1. Create MPLS TP-25 between Nodes 32 and 77.
2. Configure static VPLS-25 between the nodes with VC ID 25.
3. Add endpoint EFPs with a single tag and verify bidirectional traffic.
4. Make the EFP state of Node 32 down and observe that the traffic is blocked.
5. Now perform the following in-service VLAN edit operations on Node 32: Single tag -> Single tag pop1 -> Single tag pop2 -> Untag -> Double tag -> Single tag.
6. Change the EFP state of Node 32 to down and up three times.
7. Finally, make the EFP state of both nodes down and observe that the traffic is still flowing.

Workaround:
Perform a redundancy reload of the shelf on the nodes.

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
16-JUN-2015
Known Affected Releases:
10.20, 9.705
Known Fixed Releases:
Bug Id:
CSCuu35080
Title:
TNC crash observed while deleting Slot 7 PTM
Description:

Symptom:
TNC crash observed while deleting Slot 7 PTM

Conditions:
TNC crash observed while deleting Slot 7 PTM over fully loaded node

Workaround:
NA

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
18-JUN-2015
Known Affected Releases:
9.705
Known Fixed Releases:
Bug Id:
CSCus46663
Title:
Eqpt Failure Alarm observed after the upgrade from 9.536+ to 9.702
Description:

Symptom:
Eqpt Failure Alarm observed after the upgrade from 9.536+ to 9.702

Conditions:
1. Had a 12 node GNE - ENE setup with scale config (2 GNE - 10 ENE)
2. Performed the upgrade from 9.536+ to 9.702.
3. Observed the Eqpt Failure alarm on one of the nodes, on the TSC-E card (Slot 1).

Workaround:
Reboot the TNC

Further Problem Description:

Status:
Other
Severity:
2 Severe
Last Modified:
23-JUN-2015
Known Affected Releases:
9.702
Known Fixed Releases:
Bug Id:
CSCuu51353
Title:
Observed BFD DOWN post upgraded to R9.703 from R9.53
Description:

Symptom:
BFD stays DOWN after upgrading to R9.703.

Conditions:
Upgraded to R9.703

Workaround:
Tunnel Bounce.

Further Problem Description:
Please refer to the summary.

Status:
Fixed
Severity:
2 Severe
Last Modified:
23-JUN-2015
Known Affected Releases:
9.703
Known Fixed Releases:
Bug Id:
CSCuu59361
Title:
TNC is going for reset upon EFP SPAN addition and deletion
Description:

Symptom:
TNC card is going for reset upon EFP SPAN addition and deletion

Conditions:
Steps
===========
1. Created a P2MP EVC on the CPT-50 and added EFPs 56/1 and 56/30.
2. Configured EFP SPAN, added three ports as sources, and added 56/48 as the destination port.
3. Tried deleting an already added source port.
Observed that the WRC TNC (it is a single-TNC node) went for a reset.

Workaround:
None

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
24-JUN-2015
Known Affected Releases:
9.701, 9.702, 9.705
Known Fixed Releases:
Bug Id:
CSCuu93784
Title:
Tunnels Goes Down & Came UP on FOG Deletion
Description:

Symptom:
Tunnels go down and come up on FOG deletion when the FOG and the core port of the TP are on the same card (PTF).

Conditions:
1) Using the scale DB with 170 unprotected tunnels, with 5/4 as the core port on both nodes.
2) Created a FOG with 2 links from 5/1 & 5/2 of Node 114.
3) Once the FOG is UP, deleted the FOG from Node 114.
4) Noticed that all the TPs on 5/4 go down and come up once or twice.

The above issue is seen even if the FOG is created with one link on the same card as the core port.

Workaround:

Further Problem Description:

Status:
Open
Severity:
2 Severe
Last Modified:
27-JUN-2015
Known Affected Releases:
9.705
Known Fixed Releases:
Bug Id:
CSCuu69989
Title:
OTN Port going down & coming up on TSC reset
Description:

Symptom:
OTN Port going down & coming up on TSC reset

Conditions:
1) Created a tunnel between 4/4 on both Nodes.
2) Created a VPWS over it.
3) Created CFM & Y1731 over the above VPWS service
4) Reload all TSCs in WRC/PRC.

The above issue is seen even without any service, as long as the port is OTN UP.

Workaround:

Further Problem Description:

Status:
Open
Severity:
2 Severe
Last Modified:
29-JUN-2015
Known Affected Releases:
9.53, 9.705
Known Fixed Releases:
Bug Id:
CSCuu56988
Title:
Loss of management access with CPT node with act TSC-E autoreset
Description:

Symptom:
Loss of management access to the CPT node on active TSC-E autoreset.

Conditions:
1. Two TSC-E cards in slot-1 and slot-8
2. The active TSC-E went for an autoreset due to an unknown reason and was stuck in a failed state with a red LED.
3. The new active TSC-E did not take over completely and was not responding to management access, even though the card was showing as "ACTIVE".

Workaround:
Hard reset of the failed TSC-E card, or collaHardreset of the failed TSC-E card if telnet to the new active is possible by any means.

Further Problem Description:

Status:
Open
Severity:
2 Severe
Last Modified:
29-JUN-2015
Known Affected Releases:
9.702
Known Fixed Releases:
Bug Id:
CSCus96249
Title:
SDP TX packets drop in PB sirius.
Description:

Symptom:
SDP TX packets drop in PB sirius.

Conditions:
Problem Statement / SYMPTOM:
**************************
Setup & configuration details.

5/3 --link 1--5/3
Ixia[7/10]-[36/25] PB [36/45]-[5/2]-[Node186]- -[Node150] -[5/2]-[36/45] PB [36/25]--Ixia[7/9]
5/4--link 2---5/4


1. 2 M6 node each having 2 PTFs and 2 PBs fan out from slot 5.
2. 200 TP tunnels with 4msec configured between link 1
3. 200 TP tunnels with 4msec configured between link 2
4. Created 1 VPWS circuit between PBs and sent 100% bidirectional traffic
5. Shut/Un shut 5/3 core port and Observed traffic is not flowing end to end.
6. But PW ping is successful.
7. In present scenario, slot 5 is in SBY state in both nodes.

Workaround:
Reload the PB.

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
30-JUN-2015
Known Affected Releases:
9.538, 9.702
Known Fixed Releases:
Bug Id:
CSCus98539
Title:
TNC is going to Fail state
Description:

Symptom:
TNC goes to the fail state and the console hangs.

Conditions:
1. Dual PTF card hard reset.
2. TNC hard reset, and the setup is kept in an idle state for 3 hours.
3. The TNC goes to the fail state and the console also hangs.

Workaround:

Further Problem Description:

Status:
Terminated
Severity:
2 Severe
Last Modified:
30-JUN-2015
Known Affected Releases:
9.702
Known Fixed Releases:
Bug Id:
CSCuu40968
Title:
After migration, VPLS traffic dropped due to DMA assertion.
Description:

Symptom:
After the upgrade from 9.702 FCS to the 9.702 dummy build, traffic is not flowing in Ring VPLS circuits. From initial debugging we found that the label entry is missing from slot 4 for 20 Ring VPLS circuits. Later we found a DMA assertion failure on node 154 slot 4, which is now standby.

Conditions:
Scenario 1:
*********
1. Edit VPLS circuit [VCID 3701] and delete EFP configuration.
2. Create EFP configuration with VLAN 3701 with POP1 and enable IGMP in node 154, node 186 and node 150
3. Send query from Node 150 and report from node 154.
4. Send IGMP traffic from node150 to node 154. Traffic was flowing fine
5. Did WRC node154 slot 5 reset. Now WRC slot 4 is ACT and slot 5 as standby
6. Initiate traffic in all services
7. Perform the dummy build upgrade on all 4 nodes (Node 154, 186, 150 and 194).
Scenario 2:
*********
1. Send traffic in all services in 40PBR
2. 40PBR upgrade from 9.702_Build_47 to 9.702 FCS.

Workaround:
1. Created a new TP and VPLS on the same physical link and observed that traffic flowed fine for the new circuits.
2. Did clear xconnect for the affected VPLS circuit and observed that the traffic issue was not resolved.
3. Initiated redundancy reloads on the PTF and observed that traffic resumed for all 20 affected VPLS circuits.

Further Problem Description:
none

Status:
Open
Severity:
2 Severe
Last Modified:
30-JUN-2015
Known Affected Releases:
9.702
Known Fixed Releases:
Bug Id:
CSCuu34920
Title:
PTM reboot due to exception during deleting EFP in specific sequence
Description:

Symptom:
PTM reboot due to exception during deleting EFP in specific sequence

Conditions:
Four CPT50s have to be fanned out from the PTM.
Configure more than one service on CPT50 interfaces on every CPT50; use at least 5 to 6 interfaces.
Delete and reconfigure one service on one of the interfaces of one CPT50.
Delete the second service on the same interface of the same CPT50.

Workaround:
None

Further Problem Description:

Status:
Fixed
Severity:
2 Severe
Last Modified:
30-JUN-2015
Known Affected Releases:
9.536, 9.537, 9.702
Known Fixed Releases:

Find additional information in the Bug Search index.
