Friday, July 29, 2016

Different Definitions of Serial Bytes Cause Failure of Interconnection Between OptiX OSN Equipment and OptiX Metro Equipment

When the OptiX OSN equipment and OptiX Metro equipment are interconnected, broadcast data services are available only if broadcast data ports are interconnected correctly.

Product

Fault Type

Equipment interconnection failure

Symptom

The S1 port on the OptiX OSN equipment is interconnected with the S1 port on the OptiX Metro 1000 to transmit a broadcast data service. However, the broadcast data service is unavailable.

Cause Analysis

The OptiX equipment uses four unused overhead bytes to transmit broadcast data services.
The broadcast data ports on the OptiX OSN equipment are defined as follows:
  • For the OptiX OSN equipment, the Serial 1 byte is defined as the first unused byte following the D3 byte in the SDH RS overhead bytes. Generally, the Serial 1 byte corresponds to the S1 port.
  • For the OptiX OSN equipment, the Serial 2 byte is defined as the second unused byte following the D3 byte in the SDH RS overhead bytes. Generally, the Serial 2 byte corresponds to the S2 port.
  • For the OptiX OSN equipment, the Serial 3 byte is defined as the second unused byte following the D12 byte in the SDH MS overhead bytes. Generally, the Serial 3 byte corresponds to the S3 port.
  • For the OptiX OSN equipment, the Serial 4 byte is defined as the first unused byte following the D4 byte in the SDH MS overhead bytes. Generally, the Serial 4 byte corresponds to the S4 port.
The broadcast data ports on the OptiX Metro equipment are defined as follows:
  • For the OptiX Metro equipment, the Serial 1 byte corresponds to the F2 byte and is defined as the second unused byte following the D12 byte in the SDH MS overhead bytes. Generally, the Serial 1 byte corresponds to the F2 port.
  • For the OptiX Metro equipment, the Serial 2 byte corresponds to the X1 byte and is defined as the first unused byte following the D4 byte in the SDH MS overhead bytes. Generally, the Serial 2 byte corresponds to the COM2 port.
  • For the OptiX Metro equipment, the Serial 3 byte corresponds to the X2 byte and is defined as the first unused byte following the D3 byte in the SDH RS overhead bytes. Generally, the Serial 3 byte corresponds to the COM3 port.
  • For the OptiX Metro equipment, the Serial 4 byte corresponds to the X3 byte and is defined as the second unused byte following the D3 byte in the SDH RS overhead bytes. Generally, the Serial 4 byte corresponds to the COM4 port.
The correspondence relationships between the serial ports on the OptiX OSN equipment and the OptiX Metro equipment are as follows:
  • The Serial 1 byte on the OptiX Metro equipment corresponds to the Serial 3 byte on the OptiX OSN equipment.
  • The Serial 2 byte on the OptiX Metro equipment corresponds to the Serial 4 byte on the OptiX OSN equipment.
  • The Serial 3 byte on the OptiX Metro equipment corresponds to the Serial 1 byte on the OptiX OSN equipment.
  • The Serial 4 byte on the OptiX Metro equipment corresponds to the Serial 2 byte on the OptiX OSN equipment.
The analysis shows that the broadcast data service is unavailable due to incorrect interconnection between the broadcast data ports on the OptiX OSN and OptiX Metro equipment.
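For quick reference, the correspondence above can be captured in a small lookup table. The following Python sketch is purely illustrative (it is not a tool shipped with the equipment); the port names follow the byte definitions listed above.

# Illustrative mapping of broadcast data ports, based on the byte definitions above
# (OptiX OSN Serial n <-> Sn port; OptiX Metro Serial n <-> F2/COM2/COM3/COM4 ports).
OSN_TO_METRO = {
    "S1": "COM3",  # OSN Serial 1 (1st byte after D3, RS OH)  <-> Metro Serial 3 (X2)
    "S2": "COM4",  # OSN Serial 2 (2nd byte after D3, RS OH)  <-> Metro Serial 4 (X3)
    "S3": "F2",    # OSN Serial 3 (2nd byte after D12, MS OH) <-> Metro Serial 1 (F2)
    "S4": "COM2",  # OSN Serial 4 (1st byte after D4, MS OH)  <-> Metro Serial 2 (X1)
}

def metro_peer(osn_port):
    """Return the OptiX Metro port that must be interconnected with the given OptiX OSN port."""
    return OSN_TO_METRO[osn_port]

# In the symptom above, S1 was connected to S1; the correct peer is COM3.
print("OSN S1 must be interconnected with Metro", metro_peer("S1"))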

Procedure

  1. Re-configure the broadcast data service according to the correspondence relationships between the broadcast data ports on the OptiX OSN equipment and the OptiX Metro equipment.

Services Are Unavailable Due to Inappropriate Types of Optical Modules

Appropriate types of optical modules need to be selected to match transmission distances.

Fault Type

Unavailable services

Symptom

The OptiX OSN NEs at both ends of the link are configured with the SL4 boards. The SL4 boards are connected through fibers, but services are unavailable.

Cause Analysis

Possible causes of the fault are as follows:
  • The SL4 boards are faulty.
  • The fibers are faulty.
  • The optical distribution frame is faulty.
  • The optical modules are faulty.
  • The distance between the two NEs exceeds the maximum transmission distance supported by the optical modules.

Procedure

  1. Check the parameters associated with the SL4 boards. No exceptions are found. However, the boards report the R_LOS alarm, indicating loss of signal.
  2. Use the optical time-domain reflectometer (OTDR) to test the fibers. The fibers are normal.
  3. Check the connections between the NEs and the optical distribution frame. The connections are normal.
  4. Check the optical modules on the SL4 boards. The optical modules are normal.
  5. Measure the distance between the two NEs. The distance is 26 km, whereas the S-4.1 optical modules used on the SL4 boards support a maximum transmission distance of 15 km. Therefore, the services are unavailable due to inappropriate types of optical modules (a simple distance check is sketched after this procedure).
  6. Use the appropriate types of optical modules. Then, the fault is rectified.
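As a rough illustration of the check in step 5, the following Python sketch compares the measured link distance with the nominal reach of the optical module. Only the S-4.1 figure (15 km) comes from this case; the other reach values are typical nominal figures for standard STM-4 application codes and must be confirmed against the actual module datasheet.

# Nominal maximum transmission distances (km). The S-4.1 value is from this case;
# the others are typical figures and should be verified against the module specifications.
MAX_REACH_KM = {
    "S-4.1": 15,   # short-haul, 1310 nm (from this case)
    "L-4.1": 40,   # long-haul, 1310 nm (typical nominal value)
    "L-4.2": 80,   # long-haul, 1550 nm (typical nominal value)
}

def module_ok(module_type, link_distance_km):
    """Return True if the module's nominal reach covers the link distance."""
    return link_distance_km <= MAX_REACH_KM[module_type]

print("S-4.1 over 26 km:", module_ok("S-4.1", 26))  # False -> replace the modules
print("L-4.1 over 26 km:", module_ok("L-4.1", 26))  # True (typical value)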

Boards Cannot Be Added on the NMS Due to Mismatch Between the NE Software Version and the NMS Version

The PL3A board cannot be added on the NMS because the NE software version does not match the NMS version. This fault is rectified by upgrading the NMS.

Fault Type

Others

Symptom

The PL3A board is installed in slot 11 of the OptiX OSN 1500B. This board, however, cannot be added on the NMS. The NMS version is T2000 V200R004 and the NE software version is 5.21.17.13.

Cause Analysis

Possible causes of the fault are as follows:
  • The NE software is incorrect.
  • The NMS is faulty.

Procedure

  1. Add the PL3A board in slot 11 by running commands. The board is added successfully, indicating that the NE software is normal.
  2. Query the version mapping table. It is found that the NE software version does not match the NMS version.
  3. Upgrade the NMS to T2000 V200R006. Then, the fault is rectified.

Thursday, July 28, 2016

Can Huawei OSN 500 Support Multiple Service Access Modes?

Questions:
1, Where can I use the OSN 500?
2, Can Huawei OSN 500 Support Multiple Service Access Modes?
Answers:
1, The OptiX OSN 500 is a new-generation optical transmission product used at the access layer for leased-line and mobile BTS access.
2, Yes. The OptiX OSN 500 supports multiple service access modes, such as CES and ATM/IMA, for flexible networking.
The OptiX OSN 500 supports the circuit emulation service (CES) technology. With CES, the OptiX OSN 500 can directly receive and transmit E1 TDM and channelized STM-1 services in the pure packet domain. In this way, the TDM domain evolves smoothly to the packet domain.
Source: http://www.thunder-link.com/HUAWEI-MSTP_c138.html

How to check detailed service board information?

The use of the Huawei OSN 8800

The OptiX OSN 8800 is mainly deployed in national backbone networks, regional/provincial backbones, and some metropolitan core sites. In addition to large capacities and long-haul WDM features, the OptiX OSN 8800 integrates:
• ROADM
• T-bit electrical cross-connection
• Full-granularity grooming ranging from 100Mbit/s to 100Gbit/s
• Optical-electrical ASON synergy
• 40G/100G transmission capability
• Various management and protection functions
Empowered by these features, the OptiX OSN 8800 provides carriers with end-to-end OTN/WDM backbone transport solutions to accommodate large-capacity grooming and ultra-wideband transmission.
Together, the OptiX OSN 8800 T64/T32/T16 and OptiX OSN 1800 can form a complete end-to-end OTN network. The OptiX OSN 8800 can also be used with the hybrid MSTP, PTN, or data communication equipment to achieve a complete transport solution.

According to information released by Ovum, a well-known consulting firm in the telecom industry, by 2012 Huawei led the global optical network, WDM/OTN, 100G/40G, and backbone WDM markets. The Huawei WDM/OTN solution has served 39 of the world's top 50 carriers.

Why can't the S5700 be logged in to after power-on when the software version was upgraded to V200R005C00SPC500?


Wednesday, July 27, 2016

When Receive Optical Power Is Excessively Low Because of the End Face Problem of the Fiber Jumper, the OAU Board Reports the MUT_LOS Alarm

The receive optical power is excessively low because of the end face problem of the fiber jumper. The OAU board reports the MUT_LOS alarm.

Fault Type

Abnormal optical power
Fiber
MUT_LOS
R_LOS

Symptom

In a network consisting of the OptiX BWS 1600G equipment, a fiber jumper on the ODF is removed and then re-inserted while the RPC laser is enabled. As a result, the fiber connector is burnt. After the fiber and fiber jumper are replaced, the receive optical power of the RPC is excessively low (-53 dBm).
The OAU board at the station reports the MUT_LOS alarm.

Cause Analysis

Measure the optical power on the entire optical path. The results are as follows:
  • The transmit optical power on the FIU at the opposite station is normal.
  • The optical power on the receive side of the ODF at the local station is normal. That is, the line attenuation is normal.
  • The optical power at the receive end of the OAU board at the local station is excessively low.
  • The input optical power at the IN optical interface on the FIU at the local station is excessively low.
  • The fiber jumper between the SYS optical interface on the RPC and the IN optical interface on the FIU is normal.
According to the preceding results, it is determined that the exception is located between the ODF and the FIU. The possible causes are as follows:
  • The fiber jumper between the ODF and the RPC is faulty.
  • The RPC is faulty.
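The localization logic above (walking the receive path and flagging the first point where the measured power falls far below the expected value) can be summarized in a short Python sketch. The point names follow the path in this case, but the expected values and the tolerated margin are placeholders for illustration; only the -53 dBm reading comes from this case.

# Minimal sketch of fault localization by comparing expected and measured optical power
# along the receive path. Numeric values are placeholders except the -53 dBm reading.
MARGIN_DB = 3.0  # tolerated deviation before a point is flagged

measurements = [
    # (measurement point, expected dBm, measured dBm)
    ("FIU transmit (opposite station)", 4.0, 4.0),
    ("ODF receive side (local station)", -18.0, -18.5),
    ("RPC/OAU receive (local station)", -20.0, -53.0),
]

def first_abnormal_segment(points):
    """Return the segment (previous point -> point) where the power first drops abnormally."""
    prev = None
    for name, expected_dbm, measured_dbm in points:
        if expected_dbm - measured_dbm > MARGIN_DB:
            return (prev, name)
        prev = name
    return None

print("Abnormal drop between:", first_abnormal_segment(measurements))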

Procedure

  1. Check the fiber jumper between the ODF and the RPC. It is found that a fiber jumper of the FC/PC-LSH/UPC type is used, whereas the connector for the LINE optical interface on the RPC used on the OptiX BWS 1600G is of the LSH/APC type. The end face of a UPC connector is flat while that of an APC connector is angled. As a result, there is an air gap between the LINE optical interface on the RPC and the connector of the fiber jumper from the ODF, and the pump light of the RPC cannot be transmitted normally on the optical path. Consequently, the receive optical power is excessively low. After the fiber jumper is changed to the FC/PC-LSH/APC type, the system is normal.

Result

The problem is resolved.

Reference Information

  • Disable the lasers on a Raman optical amplifier before performing any operations on the optical path with the Raman optical amplifier.
  • The optical module interfaces for Raman optical amplifiers provided by different equipment vendors are different. Huawei and certain vendors use LSH/APC while other vendors use LSH/UPC.



Routing protocol configuration (1)

RIP (Routing Information Protocol) is one of the earliest and most widely used interior gateway protocols (IGPs). It is suitable for small, homogeneous networks and is a typical distance-vector protocol.
RIP exchanges routing information through UDP broadcast messages and sends a routing update every 30 seconds. RIP uses the hop count as the metric of routing distance: the hop count is the number of routers a packet must pass through to reach the destination. If two routes to the same destination have different speeds or bandwidths but the same hop count, RIP reports them as equidistant. RIP supports a maximum hop count of 15, that is, a packet may pass through at most 15 routers between the source and the destination network; a hop count of 16 means the destination is unreachable.
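To make the hop-count behavior concrete, the following minimal Python sketch (an illustration, not router code) mimics how RIP compares routes: only the hop count matters, bandwidth is ignored, and a hop count of 16 means unreachable.

RIP_INFINITY = 16  # a hop count of 16 means the destination is unreachable

def best_routes(routes):
    """Pick the route(s) with the lowest hop count; ties are 'equidistant' to RIP."""
    reachable = [r for r in routes if r["hops"] < RIP_INFINITY]
    if not reachable:
        return []
    best = min(r["hops"] for r in reachable)
    return [r for r in reachable if r["hops"] == best]

routes = [
    {"via": "fast link", "bandwidth_mbps": 100, "hops": 3},
    {"via": "slow link", "bandwidth_mbps": 2, "hops": 3},
    {"via": "long path", "bandwidth_mbps": 100, "hops": 16},
]
# Both 3-hop routes are returned even though their bandwidths differ;
# the 16-hop route is treated as unreachable.
print(best_routes(routes))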
1. Commands

Task                                                    Command
Enable the RIP routing process                          router rip
Specify the RIP version                                 version {1|2}
Specify a network directly connected to the router      network network

Note: RIP version 2 supports authentication, key management, route summarization, classless inter-domain routing (CIDR), and variable-length subnet masks (VLSMs).
2. Example

Router1:
router rip
 version 2
 network 192.200.10.0
 network 192.20.10.0
!
Debugging commands:
show ip protocols
show ip route

S3700-28TP-EI-DC, S3700-28TP-EI-24S-AC, S3700-28TP-EI-MC-AC



Tuesday, July 26, 2016

An NE Is Frequently Unreachable to the NMS Due to Insufficient Processing Capacity of a Router

An NE is frequently unreachable to the NMS due to insufficient processing capacity of a router.

Product

OptiX BWS 1600G

Fault Type

NEs are unreachable.

Symptom

At a certain site, there are four WDM networks consisting of 44 subracks of the OptiX BWS 1600G, two iManager T2000 servers, and iManager T2000 clients on computers.
All NEs are gateway NEs (GNEs). On the T2000, each network is configured with two gateways, and an extended ECC is used for communication inside each network.
Each network is connected, together with the T2000 server and client, to a hub by network cables. All 44 subracks are monitored by the T2000. The T2000 displays that NE communication is abnormal: NEs are unreachable at random. In addition, the NE_COMMU_BREAK and NE_NOT_LOGIN alarms are reported and are cleared automatically a while later. After multiple GNEs are configured on the T2000, the T2000 can re-monitor the NEs temporarily. The problem then occurs less frequently; however, it is not resolved completely.

Cause Analysis

The possible causes of the preceding problem are as follows:
  • Equipment problems, such as a fault on the SCC board and improper ECC settings, may result in abnormal data flow.
  • NMS problems, such as an abnormal database, network card problems, and improper NMS settings.
  • DCN problems, such as incorrect DCN networking, a fault on a router or switch, or network cable problems.
After analysis, it is concluded that before the IP address of Server 2 is changed, the 44 subracks of the four WDM systems cannot communicate with Server 2 through the switch directly, because the IP addresses of the subracks are 132.37.23.** while that of Server 2 is 132.37.5.**; that is, the subracks and Server 2 are not in the same segment. The four WDM networks are monitored by Server 2, so the data flow direction is: the equipment <---> the switch <---> the router <---> the switch <---> the server. There are 140 NEs in the four networks, so the data flow is heavy and all data is forwarded by the router. The router, however, is an early, low-end 2630E router configured with only one FE port, and communication between the two preceding segments is forwarded through the IP addresses (of the two segments) configured on that same FE port. The limited processing capacity of the router therefore becomes a bottleneck and causes the network communication abnormality: communication may be normal when the data volume is small, but once the data volume grows, congestion of data packets becomes serious and, as a result, NEs are unreachable.
After the IP address of Server 2 is changed, the IP address of Server 2 and those of the four added DWDM systems are in the same segment. In this case, the data flow direction is: the equipment <---> the switch <---> the server. The communication between the four systems and Server 2 is implemented by the switch only, and router forwarding is no longer required. This greatly eases the processing load on the router, removes the bottleneck, and makes the network run more smoothly. Thus the preceding problem is resolved.
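The segment reasoning above can be verified with a few lines of Python using the standard ipaddress module. The server address and mask are the ones quoted in this case; the subrack and old-server host parts are illustrative.

import ipaddress

# Server 2 after the change: 132.37.23.254 with mask 255.255.255.128 (from this case).
server_if = ipaddress.ip_interface("132.37.23.254/255.255.255.128")
subrack = ipaddress.ip_address("132.37.23.130")   # subrack segment 132.37.23.** (host part illustrative)
old_server = ipaddress.ip_address("132.37.5.10")  # old server segment 132.37.5.** (host part illustrative)

def same_segment(addr, interface):
    """True if addr falls inside the interface's subnet, i.e. no router forwarding is needed."""
    return addr in interface.network

print(same_segment(subrack, server_if))     # True  -> equipment <---> switch <---> server
print(same_segment(old_server, server_if))  # False -> traffic must cross the router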
As a network becomes larger and more equipment is added, the network structure becomes more and more complicated. If a network lacks overall planning at the early stage, communication problems at a later stage are very likely. In addition, some causes of communication problems are difficult to detect; they may lie in the equipment, the NMS, or the network environment, and a period of observation is needed to determine whether a processing step is effective after it is performed. This consumes a lot of time and effort. Therefore, at the early stage of project planning, not only the service requirements but also the DCN environment (such as the configuration, modules, and processing capacity of the router) should be taken into account.

Procedure

  1. In the early DCN structure, the equipment and the servers are connected through a hub, which broadcasts data packets and therefore becomes a processing bottleneck. In addition, when a ping command is run to the equipment, obvious packet loss occurs at the hub. Therefore, it is preliminarily suspected that the hub is faulty. After the hub is replaced with a 24-port Layer 2 switch, the equipment can be reached with a remote ping command, large packets are transmitted normally, and CML tools can log in to the NEs. The situation improves; however, NEs monitored by Server 2 are still unreachable at random. The problem is especially serious in System A, where certain sites can hardly be logged in to. It is therefore preliminarily concluded that the hub affects NE communication but is not the main cause of the problem. Because the problems in System A are much more serious than those in the other three systems, it is suspected that an ECC setting or a network cable connection in System A is faulty.
  2. After the ECC settings and the network cable routing in System A are checked, the T2000 restores monitoring of the NEs. After a few days of observation, occurrences of NEs being unreachable for a long time are obviously reduced. However, the NE_COMMU_BREAK alarm (NE communication interrupted) and the NE_NOT_LOGIN alarm (NE not logged in to) reported on the T2000 show that NEs in all systems monitored by Server 2 are still transiently unreachable. It is therefore concluded that the ECC settings and cable routing in System A affect NE communication but are not the main cause. Because such problems are rare on Server 1, which is located in the same equipment room as Server 2, it is suspected that Server 2 is faulty.
  3. Upload the data of certain NEs of the four systems to Server 1 and observe the operation. In addition, re-install the operating system and the T2000 on Server 2. In the following days of observation, however, the alarms indicating transiently unreachable NEs are not cleared on Server 2, and alarms indicating that the newly added NEs are unreachable are also reported on Server 1. It is therefore inferred that Server 2 is not faulty and is not the main cause. On Server 1, the original NEs are reachable and only the newly added NEs are frequently unreachable. In addition, the IP addresses of the four systems are not in the same segment as those of the two servers or the old equipment. Therefore, it is suspected that the router is faulty in forwarding.
  4. According to the on-site analysis by data communication specialists and the analysis of related data in the T2000 logs with optical network development engineers, the main cause of the unreachable NEs is network congestion: the data that the T2000 transmits to the NEs cannot be sent out in time, which also supports the inference in Step 3. The forwarding capacity of the router is insufficient, which results in congestion; packets cannot be sent out in time, and NEs become unreachable.
  5. Change the IP address of Server 2 to 132.37.23.254, the subnet mask to 255.255.255.128, and the gateway to 132.37.23.129 so that the IP addresses of the server and the newly added equipment are in the same segment. In this case, the communication between the equipment and the T2000 is implemented directly by the switch, and no router forwarding is required. After the modification, all the NEs in the four systems can be monitored normally again. During one week of observation, the alarms indicating transiently unreachable NEs are completely cleared.

Result

The problem is resolved.

Reference Information

None.


How to Set Huawei S5700 VLAN?

Question:
How do I set up VLANs on a Huawei S5700-SI switch?
S5700-SI IP: 192.168.1.200
Server IP: 192.168.1.8
S1700 IP: 192.168.1.X
I want to know how to separate different VLANs and how to interconnect different VLANs on a Huawei switch.
Answer:
Configure an IP address for each VLAN (on its VLANIF interface). The switch then automatically generates direct routes between the VLAN subnets, so hosts in different VLANs can reach each other.


More blog:

Why Automatic configuration backup cannot work on S5700

Huawei HG8010 GPON EPON Terminal Configuration

Question: what is the default username and password of Huawei Echolife HG8010 terminal?
Answer: default username: root password: admin
Question: How to login to Huawei Echolife HG8010 terminal?
Answer: By default, set your computer's IP address to 192.168.100.5 and then visit 192.168.100.1.
Question: Does HG8010 have router function?
Answer: No
Question: How to get wireless function with HG8010 GPON EPON terminal?
Answer: You need to add a wireless router to HG8010
Question: How much for Huawei HG8010 GPON?
Answer: From Huanetwork.com, FOB Hong Kong 40 USD
Question: How much for Huawei HG8010 EPON?
Answer: From Thunder-link.com, FOB Hong Kong 36 USD,
Question: Which one I need to use, HG8010 GPON or HG8010 EPON?
Answer: You need to check with your Internet service provider which one to use; if you choose the wrong terminal, you will not be able to use it.


How to Delete a Console Login Password?


Huawei HG8240 GPON EPON FAQ

Question: What are the default user name and password for the Huawei HG8240?
Answer: By default, in administrator mode the user name is telecomadmin and the password is admintelecom; for a common user, the user name is root and the password is admin.
Question: What are the default HG8240 IP address and subnet mask?
Answer: IP address: 192.168.100.1, Subnet mask: 255.255.255.0
Question: To connect to the Huawei HG8240, how should the IP address and subnet mask of the PC be set?
Answer:
Set the IP address of the PC to be in the same subnet as the LAN IP address of the HG8240/HG8245/HG8247. For example:
  • IP address: 192.168.100.100
  • Subnet mask: 255.255.255.0
Question: Why does my ONT fail to register?
Answer:
Check the following possible causes:
1. The PON terminal goes online in an incorrect mode.
2. The optical fiber connected to the ONT is of poor quality or is loosely connected.
3. The optical power of the ONT is not within the normal range.
4. The minimum and maximum logical distances configured on the OLT port to which the ONT is connected are inconsistent with the actual distances.
5. The ONT auto-find function is disabled on the OLT port.
6. When the ONT is added, the configured SN of the ONT is different from the actual ONT SN.
7. An ONT with the same SN is already connected to the OLT.
8. The ONT is a rogue ONT.
More related:

Unable to Configure the OSN 1800 When Customized for North America

Monday, July 25, 2016

The External Clock Source Is Unavailable After a Fiber Cut Because the Clock Configuration Is Incorrect

The priorities of the clock sources are set incorrectly, so the external clock source is unavailable after a fiber cut. After the priorities of the clock sources are re-set, the fault is rectified.

Fault Type

  • Bit errors
  • LTI

Symptom

The EOW board on NE A is connected to two clock sources from third-party equipment, and network clock tracing is normal. After a fiber cut occurs on the ring network, the external clock source becomes unavailable. In addition, NE D reports the LTI alarm, and a large number of bit errors occur.

Cause Analysis

In normal cases, the working path of clock tracing is external clock source->NE A->NE B->NE C->NE D->NE F; the protection path is external clock source->NE A->NE F->NE D->NE C->NE B. After a fiber cut occurs on the ring network, the clock source of NE D is lost. After the clock configuration of the network is checked, it is found that the System Clock Source Priority Table on NE D is not configured.

Procedure

  1. Query the clock subnet configuration by using the NMS.
    1. In the NE Explorer of NE D, choose Configuration > Clock > Clock Subnet Configuration.
    2. Click the Clock Quality tab and select Clock Source Quality. Click Query. The query result shows that the system receives the G.811 primary reference clock (PRC) from the 11-SL64 board.
    3. Choose View > Clock View to obtain the clock tracing relationship of NE D. The west clock source (on NE C) next to NE D is unavailable, so NE C should trace its east clock source.
  2. In the NE Explorer of NE D, choose Configuration > Clock > Clock Source Priority. Then, select Priority Table for Phase-Locked Sources of 2nd External Clock Output, and click Create to add 8-SL64 and 11-SL64 as clock sources.
  3. In the Clock View, refresh the clock tracing relationship. Then, the alarm clears.


Communication Anomaly Occurs Between the Equipment and the NMS Due to the Setting of the Firewall

If the equipment and the NM server can communicate only unidirectionally, disable the firewall or antivirus software on the NM server to restore the communication.

Product

Fault Type

  • DCN fault

Symptom

During the system commissioning, it is found that the NM server can communicate with the OptiX OSN equipment by using the ping command, but the PC used on the equipment side cannot communicate with the NMS.

Cause Analysis

The firewall or antivirus software is used on the NM server.

Procedure

  1. Replace the NM server with a PC, and use the ping command to test the DCN. The result shows that the communication is normal.
  2. Disable the firewall or antivirus software on the NM server. The equipment side can communicate with the NM server by using the ping command.
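A minimal Python sketch of the reachability test used in this procedure is shown below. It simply calls the system ping command, so it must be run once on each side (NM server and equipment-side PC); the target address is a placeholder. If only one direction succeeds, suspect a firewall or antivirus filter on the non-responding side.

import platform
import subprocess

def can_ping(host, count=4):
    """Return True if the host answers ICMP echo requests."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, str(count), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

print("peer reachable:", can_ping("192.0.2.10"))  # placeholder address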

What to Do When the 155 Mbit/s Optical Port on the Router of Company C Fails to Be Connected