| |
|
Alert Type: | Updated * |
Bug Id: | CSCut74135 | Title: | Fabricpath mode transit - control packets tagged with internal vlan 4041 |
|
Status: | Fixed |
|
Severity: | 1 Catastrophic |
Description: * | Symptom: On a Nexus 6000/5600 running FabricPath, when "fabricpath mode transit" is configured, the switch sends control packets such as CDP, LACP, and IS-IS tagged with internal VLAN ID 4041.
This causes a peer switch such as an N7K to drop the packets, so none of these protocols can negotiate and come up.
Conditions: The command "fabricpath mode transit" is configured.
Workaround: Disable transit mode and reload the switch.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(6)N1(0.269), 7.1(1)N1(0.508), 7.2(0)N1(0.147) |
|
Known Fixed Releases: | 7.0(1)ZN(0.780), 7.0(6)N1(1), 7.0(7)ZN(0.156), 7.1(1)N1(0.511), 7.1(1)N1(1), 7.1(1)ZN(0.67), 7.2(0)N1(0.167), 7.2(0)N1(0.180), 7.2(0)N1(1), 7.2(0)ZN(0.170) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux42280 | Title: | BFD session randomly flaps on N6K |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: BFD session randomly flaps on N6K.
Conditions: Multiple BFD sessions, PIM BFD and BGP BFD enabled.
Workaround: None.
Further Problem Description:
|
|
Last Modified: | 05-MAY-2016 |
|
Known Affected Releases: | 7.1(3)N1(1) |
|
Known Fixed Releases: * | 7.1(3)ZN(0.271), 7.1(4)N1(0.805), 7.1(4)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCut41811 | Title: | [ILUKA MR5/VF]Service fwm: Fabricpath should be disabled on both vpc |
|
Status: | Open |
|
Severity: | 2 Severe |
Description: * | Symptom: ISSU failed due to a switch collision.
Conditions: Performed an ISSU in a FabricPath domain.
Workaround: None.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(6)N1(0.6) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux68595 | Title: | FWM crashes while executing "show platform load-balance forwarding-path" |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: The switch crashes after executing the command "show platform load-balance forwarding-path interface port-channel ".
N6K-957# sho cores
Module  Instance  Process-name  PID   Date(Year-Month-Day Time)
------  --------  ------------  ----  -------------------------
1       1         fwm           3719  2011-05-26 02:20:53
Conditions: Execute "no hardware multicast hw-hash" under the port-channel.
Workaround: Do not execute "show platform load-balance forwarding-path interface port-channel " after removing "hardware multicast hw-hash".
Further Problem Description:
|
|
Last Modified: | 31-MAY-2016 |
|
Known Affected Releases: * | 7.0(7)N1(1), 7.2(1)N1(1) |
|
Known Fixed Releases: | 7.2(2)N1(0.393), 7.2(2)N1(1), 7.2(2)ZN(0.75) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuw95767 | Title: | KK144:FWM Crashed while executing "show tech-support fwm" in scale setup |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: FWM crash while executing "show tech-support fwm" in the scale test bed.
Conditions: Scale test bed with port-channels carrying more than 3K configured VLANs.
Workaround: None.
Further Problem Description: A buffer overrun occurs while printing the LIF/PIF of the switch in "show platform fwm info lif all verbose", which is part of "show tech-support fwm". The buffer allocation is hardcoded to 48512 bytes in the code; the output crosses that allocation and creates the overrun. When the buffer is freed, the mem-track tool catches this and asserts the fwm process.
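The overrun described above is the classic fixed-buffer formatting bug. A minimal Python sketch of the bounded-append pattern that avoids it (the 48512-byte size comes from the problem description; the helper name and everything else here are illustrative assumptions, not the actual fwm code, which is C):

```python
BUF_SIZE = 48512  # hardcoded allocation size quoted in the problem description

def append_bounded(buf, chunk, limit=BUF_SIZE):
    """Append chunk to buf, truncating instead of overrunning the limit.

    Returns (new_buf, truncated). Hypothetical helper for illustration;
    a fixed C buffer without this check is what overruns in the bug.
    """
    room = limit - len(buf)
    if room <= 0:
        return buf, True          # buffer already full: drop the chunk
    return buf + chunk[:room], len(chunk) > room

# Rendering one line per LIF: with enough LIFs the total output crosses
# the 48512-byte allocation, which is exactly the overrun scenario above.
```

With a bounds check like this the output is truncated (and the truncation can be reported) instead of corrupting adjacent memory and asserting the process.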
|
|
Last Modified: | 30-MAY-2016 |
|
Known Affected Releases: * | 7.1(4)N1(0.818), 7.3(0)N1(0.204), 7.3(0)ZN(0.144) |
|
Known Fixed Releases: | 7.1(3)ZN(0.293), 7.1(4)N1(0.827), 7.1(4)N1(1), 7.3(0)IZN(0.13), 7.3(0)N1(0.215), 7.3(0)N1(1), 7.3(0)ZN(0.192) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuz84335 | Title: | Correctable and non-FATAL handling for ES image |
|
Status: | Open |
|
Severity: | 2 Severe |
Description: | Symptom: A Nexus 6001 switch might hang with no response via the console, mgmt0, or inband interfaces. In certain cases, NX-OS error messages such as the following are printed before the switch is impacted.
%USER-0-SYSTEM_MSG: 452: PCIe critical FAILURE DETECTED, contact Cisco TAC - pfm
%USER-0-SYSTEM_MSG: 453: PCIe critical FAILURE DETECTED, contact Cisco TAC - pfm
%USER-0-SYSTEM_MSG: 454: PCIe critical FAILURE DETECTED, contact Cisco TAC - pfm
%USER-0-SYSTEM_MSG: 455: PCIe critical FAILURE DETECTED, contact Cisco TAC - pfm
Conditions: Seen on Nexus 6001 platforms.
Workaround: The switch will need a BIOS upgrade. Refer to the following field notice for details: http://www.cisco.com/c/en/us/support/docs/field-notices/641/fn64110.html
Further Problem Description:
|
|
Last Modified: | 28-MAY-2016 |
|
Known Affected Releases: | 6.0(2)N2(7), 7.0(1)N1(1), 7.0(5)N1(1a), 7.0(7)N1(1), 7.1(1)N1(1), 7.1(2)N1(1), 7.2(0)N1(1) |
|
Known Fixed Releases: * | 7.0(7)N1(0.2), 7.0(7)N1(1a) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuw33676 | Title: | fwm core at fwm_fwim_disassociate_pif_from_pc_int -kk 131 |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: * | Symptom: FWM assert failure !pc_lif->runtime_data.vif_id_allocated. This is seen when scale configs are applied simultaneously on both switches in a vPC.
Conditions: Seen in a scale test bed with 900 port-channels, 500 SVIs, and 2lvpc configs; the crash occurs while the port-channels are brought down.
Workaround: Applying the configs one switch at a time avoids the crash.
Further Problem Description: pd_lif->container_p has a NULL value for the 2lvpc port-channel; because of this certain routines are skipped, and the condition is caught as vif_id_allocated. container_p is used for VLAN-translate-related values.
|
|
Last Modified: | 24-MAY-2016 |
|
Known Affected Releases: | 7.3(0)N1(0.131) |
|
Known Fixed Releases: * | 7.1(3)ZN(0.289), 7.1(4)N1(0.823), 7.1(4)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuz73919 | Title: | Traffic on FP PVLAN isolated trunk with port-sec dynamic getting dropped |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: Traffic on a PVLAN isolated trunk port is dropped in a FabricPath setup.
Conditions: The VLAN must be in FabricPath mode and the ingress interface must be a PVLAN isolated trunk port.
Workaround: None.
Further Problem Description:
|
|
Last Modified: | 31-MAY-2016 |
|
Known Affected Releases: | 7.1(4)N1(0.813) |
|
Known Fixed Releases: * | 7.1(3)ZN(0.296), 7.1(4)N1(0.830), 7.1(4)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCut84067 | Title: | FWM core hit in the JANJUC 163 image after moving into maintenance mode |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: * | Symptom: FWM process crash after moving into maintenance mode; a core file is generated.
Conditions: - The switch is toggled between maintenance mode and normal mode with "system mode maintenance" and "no system mode maintenance". - Observed only in the JANJUC 163 image.
Workaround: - No workaround available. - Observed only once and not reproducible.
More Info: - When the system moves into maintenance mode, all interfaces are brought down. - A few seconds later the crash is observed in the fwm module. - The core decode shows an access to a NULL pointer in the ACL update path.
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.2(0)N1(0.163) |
|
Known Fixed Releases: | 7.1(3)N1(0.623), 7.1(3)N1(1), 7.1(3)ZN(0.30), 7.3(0)N1(0.80), 7.3(0)N1(1), 7.3(0)ZN(0.78) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuz59961 | Title: | fwm core after "show int brief | grep "Vlan" | grep "up" | count" cmd |
|
Status: | Open |
|
Severity: | 2 Severe |
Description: * | Symptom: While executing the command "show int brief | grep "Vlan" | grep "up" | count", the N6K box crashes with an fwm hap reset.
Conditions: show int brief | grep "Vlan" | grep "up" | count
Workaround: None.
Further Problem Description:
|
|
Last Modified: | 18-MAY-2016 |
|
Known Affected Releases: | 8.3(0)CV(0.426) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux95887 | Title: | port-security internal information not cleared on feature de-activation |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: * | Symptom: When port-security is disabled from an interface, port-security internal information remains in the system.
Conditions: Port-security enabled and disabled on a port.
Workaround: Reload the FEX or disable port-security globally.
Further Problem Description:
|
|
Last Modified: | 16-MAY-2016 |
|
Known Affected Releases: | 7.2(1)N1(0.331) |
|
Known Fixed Releases: | 7.1(3)N1(4), 7.1(3)ZN(0.224), 7.1(4)N1(0.764), 7.1(4)N1(1), 7.3(1)N1(0.38), 7.3(1)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCut60043 | Title: | N5K/6K - 40G transceivers have delay for link-up on module boot/reload |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: On a Nexus 6004, after a chassis or module reload, 40G interfaces can take up to 50 minutes to come online and forward traffic.
Conditions: Reloading a chassis or LEM module that contains at least one 40G transceiver in a 6004 chassis.
Workaround: None; the interfaces do come up after a while.
Further Problem Description:
|
|
Last Modified: | 13-MAY-2016 |
|
Known Affected Releases: | 7.0(2)N1(1), 7.1(0)N1(1) |
|
Known Fixed Releases: * | 7.0(7)ZN(0.273), 7.0(8)N1(0.332), 7.0(8)N1(1), 7.1(3)ZN(0.175), 7.1(3)ZN(0.279), 7.1(4)N1(0.725), 7.1(4)N1(0.813), 7.1(4)N1(1), 7.3(0)BZN(0.47), 7.3(0)N1(0.102) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCue99559 | Title: | N5K/6K: FWM hap reset during ISSU upgrade |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: A Nexus 5K/6K can crash in the FWM process during a non-disruptive ISSU.
Conditions: Seen during a non-disruptive ISSU upgrade.
Workaround: None
Further Problem Description: The crash is due to stale entries in PSS which could have occurred in the past. The trigger is VLAN deletion/addition plus executing "no feature-set fabricpath" and re-adding it in older code.
|
|
Last Modified: | 13-MAY-2016 |
|
Known Affected Releases: | 7.0(7)N1(1) |
|
Known Fixed Releases: * | 7.0(7)ZN(0.266), 7.0(8)N1(1), 7.1(3)ZN(0.133), 7.1(4)N1(0.698), 7.1(4)N1(1), 7.2(2)N1(0.355), 7.2(2)N1(1), 7.2(2)ZN(0.39), 7.3(0)IZN(0.13), 7.3(0)N1(0.259) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux14987 | Title: | Nexus 5k/6k crash with "lacp hap reset" |
|
Status: | Fixed |
|
Severity: | 2 Severe |
Description: | Symptom: Specific symptoms are unknown.
Conditions: "lacp" feature must be enabled.
Workaround: Disable "lacp".
Further Problem Description: N/A
|
|
Last Modified: | 11-MAY-2016 |
|
Known Affected Releases: | 7.0(5)N1(1a) |
|
Known Fixed Releases: * | 7.1(3)ZN(0.136), 7.1(4)N1(0.701), 7.1(4)N1(1), 7.2(2)N1(0.355), 7.2(2)N1(1), 7.2(2)ZN(0.39), 7.3(0)IZN(0.13), 7.3(0)N1(0.237), 7.3(0)N1(1), 7.3(0)ZN(0.214) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux88309 | Title: | KK260:maint mode runn is added to "show runn int <name>" cmd output |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: | Symptom: The "show runn mmode" command output is added into the "show running-config interface " command output.
Conditions: A maintenance-mode profile must be configured.
Workaround: Remove the maintenance-mode profile.
Further Problem Description:
|
|
Last Modified: | 11-MAY-2016 |
|
Known Affected Releases: | 7.3(0)N1(0.260) |
|
Known Fixed Releases: * | 7.3(0)IZN(0.13), 7.3(0)N1(0.269), 7.3(0)N1(1), 7.3(0)ZN(0.246), 8.3(0)CV(0.436) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuy00525 | Title: | fwm hap reset @in fwm_port_vlan_xlate_handle_logical_portup |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: | Symptom: A FEX goes through an offline/online sequence; sometimes the switch crashes with an fwm hap reset.
Conditions: Scaled VLAN translation CLIs configured on a Tiburon-based FEX NIF port-channel; VLAN translation commands are added/removed.
Workaround: None.
Further Problem Description:
|
|
Last Modified: | 27-MAY-2016 |
|
Known Affected Releases: | 7.3(0)N1(0.268) |
|
Known Fixed Releases: * | 7.1(3)ZN(0.287), 7.1(4)N1(0.821), 7.1(4)N1(1), 7.2(2)N1(0.434), 7.2(2)N1(1), 7.2(2)ZN(0.142), 7.3(1)N1(0.364), 7.3(1)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux71610 | Title: | Multiple VPC sections are seen after maintenance-mode is configured |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: | Symptom: DCNM does not discover the interfaces due to multiple sections of the vPC domain.
Conditions:
router ospf 100
  isolate
router ospfv3 100
  isolate
fabricpath domain default
  isolate
vpc domain 100   <<<< this section is seen in "show runn | sec vpc"
  shutdown
N6K_P1#
Workaround: remove the "vpc domain " configuration from maintenance-mode
Further Problem Description:
|
|
Last Modified: | 11-MAY-2016 |
|
Known Affected Releases: | 7.3(0)N1(0.245) |
|
Known Fixed Releases: * | 7.3(0)IZN(0.13), 7.3(0)N1(0.259), 7.3(0)N1(1), 7.3(0)ZN(0.235), 8.3(0)CV(0.436) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCun50553 | Title: | Higher re-covergence time when VPC+ switch comes back after a reload. |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: * | Symptom: Higher re-convergence time when a VPC+ Nexus 600x/55xx switch comes back after a reload.
Conditions: VPC+ and HSRP are configured on a Nexus 600x/55xx running NX-OS 7.0(0)N1(1). The re-convergence time corresponds to the "delay restore interface-vlan" timer in the vPC config. The default is 10 seconds, hence by default loss of up to 10-12 seconds can be seen.
Workaround: Reduce the "delay restore interface-vlan" timer to 1 second.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(0)N1(1.1) |
|
Known Fixed Releases: | 7.3(0)IZN(0.7), 7.3(0)N1(0.160), 7.3(0)N1(1), 7.3(0)ZN(0.148) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuv35326 | Title: | N6k :: ICMPv6 related to neighbor discovery punted to the CPU |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: * | Symptom: On a Nexus 5K/6K, ICMPv6 packets related to neighbor discovery are punted to the CPU even if no SVI is configured in the VLAN where the packets are present.
Conditions: NX-OS version 7.0(0)N1(1) and newer - still to be confirmed.
Workaround: If this is causing drops in copp-system-class-arp, ARP policer can be increased.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.1(1)N1(1) |
|
Known Fixed Releases: | 7.1(3)N1(0.659), 7.1(3)N1(1), 7.1(3)ZN(0.67), 7.2(2)N1(0.389), 7.2(2)N1(1), 7.2(2)ZN(0.71) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCsz97119 | Title: | No syslog generated when power supply status is changed |
|
Status: | Open |
|
Severity: | 3 Moderate |
Description: * | Symptom:
Even though the power supply status changed on the device, DCNM still shows the old status.
Conditions:
For offline changes, DCNM relies on the "accounting log" for config changes and the "system log" for status changes. So, when the power supply status changes on the switch, DCNM expects a syslog to be written to the system log file so that it is available in the "show log logfile" command response. Since the Nexus 5000 switch does not generate a status-change syslog for power supply or fan status changes, DCNM is not aware of those changes and continues to show the old status.
Workaround:
Rediscovering the switch in DCNM will solve the problem.
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 4.1(3)N1(0.129) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux51705 | Title: | interface counters stucked in 0 |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: * | Symptom: The RX and TX interface byte counters are stuck at 0.
Ethernet1/1 is up
  RX
    49741910854 unicast packets  560907 multicast packets  1682 broadcast packets
    49742473435 input packets  0 bytes  <<<<<<<<<
  TX
    58849010426 unicast packets  1027488 multicast packets  3991633 broadcast packets
    58854029539 output packets  0 bytes  <<<<<<<<<
Conditions: After a manual interface flap, the interface counters are abnormally high and no longer increasing:
SWITCH# sh int e1/1 counters detailed
Ethernet1/1
  Rx Packets: 18446744003658477104  <<<<
  Rx Bytes: 18446688262119259735  <<<<
  Tx Bytes: 18446661760268377751  <<<<
SWITCH# sh int Eth 1/1 | i bytes
  18446744003658477104 input packets 18446688262119259735 bytes
  1967789870 jumbo packets 0 storm suppression bytes
  5997769666 output packets 18446661760268377751 bytes
The SNMP counters are also affected and stop incrementing once the counters reach 2^64 (about 1.844x10^19):
[15:23 > snmpwalk
.1.3.6.1.2.1.2.2.1.10.436666368 = Counter32: 4294967295
.1.3.6.1.2.1.2.2.1.16.436666368 = Counter32: 4294967295
After 5 min:
[15:29 > snmpwalk
.1.3.6.1.2.1.2.2.1.10.436666368 = Counter32: 4294967295
.1.3.6.1.2.1.2.2.1.16.436666368 = Counter32: 4294967295
Later, "clear counters" was issued for the interfaces. The counters reset but are now stuck at 0 despite all the traffic flowing through the interface.
Workaround: N/A
Further Problem Description:
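The impossibly large values in the symptom output sit just below 2^64: they look like small negative deltas stored in an unsigned 64-bit counter, while the Counter32 values have pegged at their 32-bit maximum. A short sketch of that interpretation (an illustration of the arithmetic only, not platform code):

```python
U64 = 1 << 64
COUNTER32_MAX = (1 << 32) - 1   # 4294967295, the pegged value in the snmpwalk output

def as_signed64(v):
    """Interpret an unsigned 64-bit counter value as two's-complement signed."""
    return v - U64 if v >= (U64 >> 1) else v

# The stuck Rx packet counter from the symptom output decodes to a small
# negative number, i.e. the counter underflowed past zero:
delta = as_signed64(18446744003658477104)   # a negative value around -7e10
```

Reading the pasted counters this way is a quick sanity check when deciding whether a value is a genuine packet count or an underflow artifact.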
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.1(0)N1(0.434), 7.1(0)N1(1), 7.1(2)N1(1) |
|
Known Fixed Releases: | 7.0(7)ZN(0.274), 7.0(8)N1(0.332), 7.0(8)N1(1), 7.1(3)N1(4), 7.1(3)ZN(0.165), 7.1(4)N1(0.719), 7.1(4)N1(1) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuj22176 | Title: | traffic loss on vPC trunk with 1K vlans after the reload of vPC+ primary |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: * | Symptom: On a Nexus 5000 vPC+ pair, reloading the primary vPC switch causes up to 4-5 seconds of traffic loss during the vPC reconvergence.
Conditions: Nexus 5000 vPC+ pair is part of a FabricPath domain with 1000 FabricPath vlans configured in the vlan database and 1000 vlans permitted on the vPC trunk allowed vlan list.
Workaround: Reducing the number of allowed vlan list on the vPC trunk reduces the traffic loss time.
Further Problem Description: Problem:
MCECM reinits all vPCs on the secondary after a reload because the previous_comp_check fails. This reinit causes some traffic loss because the remote end may have already started sending traffic when the ports come up for the first time.
Fix:
Since MCECM reinits all vPCs on the secondary when they are brought up for the first time after a reload, the first bringup is failed in the bundle_member_bringup sequence, before the port is TX- and RX-enabled, if a reinit is pending. This fix only works for LACP port-channels.
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 5.2(1)N1(5), 6.0(2)N1(2a) |
|
Known Fixed Releases: | 7.0(1)ZN(0.705), 7.0(1)ZN(0.715), 7.0(1)ZN(0.719), 7.0(6)N1(0.210), 7.0(6)N1(0.218), 7.0(6)N1(0.221), 7.0(6)N1(1), 7.1(0)EVN(0.18), 7.1(0)N1(0.344), 7.1(0)N1(0.358) |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuu27591 | Title: | N5K/6K: VPC learnt MACs are not synched to peer |
|
Status: | Open |
|
Severity: | 3 Moderate |
Description: * | Symptom: vPC-learnt MAC addresses are not synched between vPC peers on a Nexus 5K/6K.
Conditions: Seen after the MAC address table limit has been reached and the condition has cleared.
Workaround: Reload switch.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.1(1)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCuz56050 | Title: | FWM L2MP Nexhop Misprogramming causes pkts egress wrong interface |
|
Status: | Terminated |
|
Severity: | 3 Moderate |
Description: | Symptom: In rare situations, Nexus 5600/6000 platforms may forward unicast packets out of the wrong interface in a FabricPath environment, leading to traffic blackholing.
Conditions: - Nexus 5600/6000 - FabricPath topology - A config change or port flap of the port-channel towards a nexthop, and/or a config change of the SWID of that nexthop, has a high probability of being a requirement as well.
Workaround: A shut/no-shut of the expected egress interface and/or the incorrect egress interface should clear the issue. Otherwise a shut/no-shut of the ingress interface may also resolve it.
Further Problem Description: The following information should be gathered prior to any port flapping, if at all possible, for TAC analysis:
show tech fabricpath isis
show tech fabricpath switch-id
show tech fabricpath topology
show tech fwm
tac-pac
|
|
Last Modified: | 07-MAY-2016 |
|
Known Affected Releases: | 7.0(2)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCuz82217 | Title: | vPC Shutdown causes packet loss of 20-30 second across vPC+ pair |
|
Status: | Open |
|
Severity: | 3 Moderate |
Description: | Symptom: vPC shutdown (i.e., issuing the "shutdown" command under "vpc domain X") causes an outage of 20-30 seconds across the vPC+ pair.
Conditions: This is only observed in NX-OS 7.x, NX-OS 6.x is not affected.
Workaround: None
Further Problem Description: |
|
Last Modified: | 25-MAY-2016 |
|
Known Affected Releases: | 7.0(1)N1(1), 7.0(8)N1(1), 7.1(3)N1(1), 7.3(0)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCuz84691 | Title: | Allow vlan list per port honored when dot1q auto-config trigger enabled |
|
Status: | Open |
|
Severity: | 3 Moderate |
Description: | Symptom: In DFA or Programmable Fabric, the dot1q auto-config trigger supports enabling/disabling the trigger per port. When dot1q is disabled per port but the dot1q trigger is globally enabled, the current behavior is that the allow-vlan list is not honored even though the command is configured. The allow-vlan list should be honored when the dot1q trigger is disabled per port.
Conditions: NX-OS 7.3(0)N1(1) introduced the option to enable/disable the dot1q trigger per port.
Workaround: Disable the dot1q trigger globally.
Further Problem Description: The allow-vlan list configuration is affected when the dot1q auto-config trigger is used, but not with other triggers such as vdp, vmtracker, or lldp.
|
|
Last Modified: | 29-MAY-2016 |
|
Known Affected Releases: | 7.3(0)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuu14960 | Title: | Static MAC configuration only allows +-1000 characters |
|
Status: | Fixed |
|
Severity: | 3 Moderate |
Description: * | Symptom: Configuration of a static MAC address with a long list of interfaces is cut off after approx. 1000 characters.
Conditions: This is seen when configuring a long command for a static MAC which lists a lot of interfaces.
Workaround: There is no known workaround at this time.
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(6)N1(0.7), 7.1(1)N1(0.8) |
|
Known Fixed Releases: | 7.1(3)N1(0.619), 7.1(3)N1(1), 7.1(3)ZN(0.26), 7.3(0)ZN(0.68) |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCuq79053 | Title: | "show interface description" does not show SVI information |
|
Status: | Open |
|
Severity: | 4 Minor |
Description: | Symptom: "show interface description" does not display SVI-related information.
Conditions:
Workaround:
Further Problem Description:
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(1)N1(3) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCuw51765 | Title: | Nexus inconsistent "show interface port-channel # status" command output |
|
Status: | Open |
|
Severity: | 4 Minor |
Description: | Symptom: On 56xx and 60xx boxes, the "show interface port-channel status" CLI is not supported.
Conditions: "show interface port-channel status", as available on the N7K and N3K, is not present on the N5K/6K.
Workaround: Currently only "show interface port-channel brief" can be used for basic troubleshooting.
Further Problem Description:
|
|
Last Modified: | 19-MAY-2016 |
|
Known Affected Releases: | 7.2(0)N1(1), 7.2(1)N1(0.242), 7.2(1)N1(0.294), 7.2(1)N1(0.313) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCux91904 | Title: | "hardware multicast hw-hash" not seen in running-configuration |
|
Status: | Other |
|
Severity: | 5 Cosmetic |
Description: * | Symptom: Even after enabling the command "hardware multicast hw-hash", it is not seen in the running configuration. It shows as not enabled in "show run all".
N6K-957# sh run int po1
!Command: show running-config interface port-channel1
!Time: Mon Jun 20 18:00:45 2011
version 7.0(7)N1(1)
interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 20
  speed 10000
N6K-957# sh run int po1 all | in mul
no switchport block multicast
no hardware multicast hw-hash
storm-control multicast level 100.00
In the latest version, 7.2(1)N1(1):
interface port-channel1
  no description
  switchport
  switchport mode trunk
  no switchport monitor
  no switchport dot1q ethertype
  no switchport priority extend
  priority-flow-control mode auto
  lacp suspend-individual
  lacp min-links 1
  lacp max-bundle 16
  no port-channel port load-defer
  lacp fast-select-hot-standby
  lacp graceful-convergence
  no switchport block unicast
  no switchport block multicast
  hardware multicast hw-hash
  no hardware vethernet mac filtering per-vlan
  spanning-tree port-priority 128
  spanning-tree cost auto
  spanning-tree link-type auto
  spanning-tree port type normal
  no spanning-tree bpduguard
  no spanning-tree bpdufilter
  logging event port link-status default
  logging event port trunk-status default
  speed 10000
  duplex auto
  flowcontrol receive off
  flowcontrol send off
  negotiate auto
  mtu 1500
  delay 1
  snmp trap link-status
  bandwidth 20000000
  no bandwidth inherit
  storm-control broadcast level 100.00
  storm-control multicast level 100.00
  storm-control unicast level 100.00
It always shows as enabled, even if the config is removed from the port-channel.
Conditions:
Workaround: None
More Info: The required changes for this bug were already incorporated along with CSCux68595.
|
|
Last Modified: | 30-MAY-2016 |
|
Known Affected Releases: | 7.0(7)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCut26977 | Title: | NX-OS: List of "Local suspended VLANs" should be mirrored on VPC peer |
|
Status: | Open |
|
Severity: | 6 Enhancement |
Description: * | Symptom: `show vpc consistency-parameters global` output shows only locally suspended VLANs, i.e. the VLANs under `Local Value`.
Conditions: The behaviour is general for NX-OS.
Workaround: To get the full picture of suspended VLANs, run the command on BOTH vPC peers.
Further Problem Description: `Local suspended VLANs` is not part of the regular consistency-checker and is provided for information ONLY.
The enhancement request was opened to align the outputs between peer chassis.
|
|
Last Modified: | 20-MAY-2016 |
|
Known Affected Releases: | 7.0(5)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCuu67644 | Title: | DFA Multicast traffic received on border leaves duplicated |
|
Status: | Open |
|
Severity: * | 6 Enhancement |
Description: | Symptom: With Nexus 56xx/6000 series switches in a Dynamic Fabric Automation network, multicast traffic received on border leaves may be duplicated towards the fabric and towards the receivers.
Conditions: - The multicast traffic is received on a vPC. - The SVI for the VLAN receiving the traffic on the vPC is configured for PIM.
Workaround: Use L3 ports/port-channels/sub-interfaces for external PIM peering towards the multicast router. Use of an SVI on border leaves for PIM peering is *currently* not supported and may lead to duplicates.
More Info: Future support for PIM peering over an SVI in a vPC+ border leaf deployment is being evaluated and worked on.
|
|
Last Modified: | 09-MAY-2016 |
|
Known Affected Releases: | 7.1(1)N1(1) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | New |
Bug Id: | CSCum14020 | Title: | dot1x: traffic flooding due to miss mac address in MAC table |
|
Status: | Terminated |
|
Severity: | 6 Enhancement |
Description: | Symptom: The user configures dot1x and sends bidirectional traffic, but the traffic floods the VLAN because the MAC address does not stay in the MAC table.
Conditions: This only happens when the user configures the MAC aging timer to 1.
Workaround: Change the MAC aging timer to > 50; dot1x then works fine with no more traffic flooding.
Further Problem Description:
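The aging-timer dependency above reduces to a simple inequality: frames flood whenever the source stays quiet longer than the aging timer, so a very short timer guarantees the entry expires between refreshes. A sketch of that reasoning (units and the example gap are assumptions for illustration, not measured values from the bug):

```python
def entry_ages_out(aging_timer, quiet_gap):
    """True when a dynamically learned MAC entry expires before the source
    sends again, which makes subsequent frames flood the VLAN.
    Illustrative model only; units are whatever the platform uses."""
    return quiet_gap > aging_timer

entry_ages_out(1, 30)    # timer of 1 expires between refreshes -> flooding
entry_ages_out(51, 30)   # the suggested >50 setting keeps the entry alive
```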
|
|
Last Modified: | 07-MAY-2016 |
|
Known Affected Releases: | 7.0(0)N1(0.426) |
|
Known Fixed Releases: | |
|
|
| |
| |
|
Alert Type: | Updated * |
Bug Id: | CSCun63772 | Title: | Norcal 96 CLEM: "show interface transceiver *" not working correctly |
|
Status: | Fixed |
|
Severity: | 6 Enhancement |
Description: * | Symptom: TBD
Conditions: TBD
Workaround: TBD
Further Problem Description: TBD
|
|
Last Modified: | 05-MAY-2016 |
|
Known Affected Releases: | 6.0(2)N3(1.1.21) |
|
Known Fixed Releases: | |
|
|
| |