Legacy Systems of Electrical Utility Industry – Part 2

Posted March 30, 2013 by wangx49931
Categories: Energy, Legacy System, SOA

Part 2: Business Vision and Strategy

Vision:

If electricity is still around in the next 20 years, what kinds of products and capabilities will a product vendor or solution provider offer for utilities and customers to manage their power?

Traditional products and their existing functionality will not meet the emerging needs. The boundaries between different products need to be broken down in order to provide customer-driven functionality. For instance, customers are no longer looking at operation solely from a distribution network perspective. New data sources from meters and DA devices provide new dimensions for operation and control. At the same time, with the penetration of distributed generation, traditional monolithic control will not be a feasible solution at all. Coordination between control center systems and field devices requires breaking down the traditional function boundaries of existing products.

Many existing network monitoring, operation, and control capabilities are tightly coupled within their legacy products. Most vendors and solution providers currently lack the ability to package different functions with flexible deployment options to meet customers' emerging needs. As a result, most vendors still have to sell their monolithic products rather than capabilities based on customer needs. A flexible way of packaging different features in various combinations would greatly improve a vendor's competitiveness in the market, as well as the satisfaction of utility customers.

Market-leading product vendors need to think about how to change their role from vendor to trusted advisor. As trusted advisors they will work with customer executives to explore emerging needs and direction on a confidential basis, similar to the role of a consultant. At this level, in the context of the decision-making process, trusted advisors really don't enter or exit; they are part of the circle. The customer and advisors can validate each other's objectives with an external perspective, identify an issue together, assess the problem, and create a solution.

Strategy:

Legacy systems are a significant business asset. Critical business data and processes are often managed in legacy systems, so protecting the existing investment and gaining incremental value from these systems is very important. Key leading product vendors' portfolios were mostly established through a series of acquisitions. Each product serves its intended market very well; however, they all face great challenges in meeting the emerging needs. None of the products was designed for an end-to-end smart grid solution. Trying to patch together systems that were never intended to integrate can produce long-term effects, including wasted time, lost money, and an inability to innovate, that persist indefinitely. The lengthy lifetimes of legacy systems, often spanning several generations of hardware, render their core technologies fragile, largely obsolete, and especially difficult to integrate with new SOA services. A common and consistent approach should be taken to transition the architecture from siloed legacy systems to a completely integrated modern software solution. This is not a problem of dealing with one particular legacy system; it is about modernizing the whole product portfolio. Key product vendors should consider providing the following capabilities to enable them to change their role to trusted advisor:

Enterprise Architecture: including the business vision, goals, objectives, and value propositions, as well as the strategies and tactics that will be used to achieve them; capabilities, services, events, information, roles, locations, organization, and terminology; the business scenarios, processes, applications, services, components, data, personnel, and other elements that support or implement business functions; and the specific hardware and software.

Platform-Based Solution: Horizontally, support integration with both internal and external products to enable business process automation. Vertically, be able to turn business data into business decisions. Provide fast time to market for integration and new features. Take a component-based approach so that a common feature can be deployed with any relevant existing product, and features can be developed beforehand. Free valuable resources from legacy systems and make them more agile through the platform-enabled solution. Resources will be trained on, and work with, a more common and consistent IT stack, well-defined interfaces, and modern technologies. Knowledge sharing and code reuse become much easier, no longer dependent on motivation or culture but built into the platform and technology stack.


Legacy Systems of Electrical Utility Industry – Part 1

Posted March 17, 2013 by wangx49931
Categories: Energy, IT, Legacy System, SOA

Part 1 : Industry Emerging Needs and Business Challenges

Industry Emerging Needs

One of the most demanding needs is interoperability between various functions, products, and systems. From a software system perspective, interoperability means that two or more systems can share certain data and carry out business processes that require involvement from all the systems. Without such interoperability, the same business process must be executed with extensive human intervention, which results in cost and efficiency problems. A well-known use case in this area is billing process automation with AMI meter reading data, which requires interactions between CIS, AMI, and potentially other systems. From a grid monitoring and operation perspective, the requirements of integrating OMS and DMS functions, EMS with PMU data, OMS with AMI events, and DMS with Substation Automation functions are all based on interoperability between those systems.

Harvesting more value from the new data provided by systems like AMI, PMU, and DA is the ultimate goal for utilities. Data coming from different sources contains valuable information. In addition to leveraging it for business process automation, building capabilities to coordinate the data and transform it into actionable business decisions would enable utilities to make informed decisions in management, planning, operation, and control. Wide area monitoring and control is a good example of leveraging PMU data to improve the reliability of the transmission network. Correlating AMI power up/down events with network events is another good example, improving outage management capabilities for the distribution network. Demand side management, or demand response, gives utilities the capability to manage peak demand and energy cost. In general, new data coming from different sources can provide the following grid-level capabilities:

Grid Visualization:

•Provide a holistic view of the network including distribution substations, feeders, premises, EV, DER, and meters.
•Provide both geographical and schematic views with connectivity, topology and live SCADA, DA, meter, outage, crew data.

Grid Situation Awareness:

•On-line analysis – Near-real-time analytics that provide information about network voltage profile, power factor, reactive power, operation prediction, etc., giving operators the critical information needed to maintain the reliability of the system in the most efficient way
•Off-line analysis – Analytics based on historical data that provide histories of voltage profile, power factor, power losses, theft, patterns, etc., giving decision makers critical information regarding network extension, construction, and maintenance

Smart Grid Optimization & Control:

•Switching Optimization
•Voltage & Var Control
•Integrated EV and DER Control

The global microgrid market is growing, according to cleantech research firm Pike Research. Peter Asmus, Pike senior analyst, explains: “A wide range of electricity users are demonstrating strong demand for power generation and distribution systems that can be operated independently from the utility grid. A few of the market drivers include concerns about grid reliability, rising costs of fuel, broader availability of distributed generation technologies, and a drop in prices for some nontraditional energy sources such as solar photovoltaic systems.”

Increasing demands for operational data from utilities have been coming from regulatory bodies and customers. Customers want to know more about what is going on in the network and how their service is impacted. For instance, better communication between utilities and customers during storm conditions (which areas are impacted, how long restoration will take, the progress of the work, etc.) is much needed.

Business Challenges

Most of the existing EMS, DMS, OMS, and GIS products are in slow-moving sectors. The corresponding market is becoming more and more mature. With products getting commoditized and competition increasing, the profit margin drops accordingly. If product vendors keep selling products without major changes, they have to stay with the slow-moving market and figure out ways to cut costs to maintain profit. At the same time, many of the products overlap with each other in many ways, which may confuse customers and in turn hurt opportunities in the market.

All product vendors face some level of fundamental disconnect between their existing product portfolio and the emerging needs. The new functions and capabilities need to be built with richer, scalable, modern technologies focused on grid-level operation. This requires integration with various systems and an environment that supports new applications and functions. However, most business-critical data and processes are stored and maintained in the legacy systems, so moving forward with modern technology while protecting and leveraging existing systems is a great challenge:

1. Where new functionalities live – Most of the new functions require a hosting environment where business-critical data (network model, status, outage, etc.) can be provided. Due to the overlap between products, some of the new functions are actually required by many of the existing legacy systems. A cost-effective approach is needed to avoid potential duplication of investment.
2. How new functionalities interact with legacy functions – Interacting with legacy systems or third-party systems imposes another challenge, since each system has a different data format, interface technology, and stack. Without a systematic approach, integration with these systems can easily become an unmanageable nightmare.
3. Simplified/unified user experience across installation, configuration, maintenance, and visualization – Each existing product has a different way of doing all these things. When introducing more functions to an existing portfolio and thinking about growing your business, you need to think about what kind of user experience to provide to your customers.

Another challenge a product vendor may face is coordination between different organizations and product lines. The traditional style of operation can actually be a roadblock for a business trying to move in a direction that blurs the boundaries between product lines, provides emerging capabilities based on customer needs, and quickly adapts to new requirements.

The high cost of managing an existing portfolio can have multiple causes. Staff spend such a great deal of time maintaining and rewriting legacy code in an attempt to keep up with user demands that they have little if any time to create new or unique functionality to improve business processes. As a result, it is common for these IT staff to have low morale and high turnover. The resource pool for legacy systems is shrinking, and retaining these resources is becoming a huge issue. Given the virtually limitless possibilities of today's technology, most programmers would prefer to create new and different applications and solutions rather than simply fix what is broken. Integration of multiple legacy systems can easily fall into the trap of creating point-to-point interfaces based on existing capabilities, which results in long-term maintenance and upgrade issues that translate into cost. Another key factor is that new functions get put on a legacy system in ways that deviate from the original purpose for which the system was designed.

SOA strategy

Posted June 22, 2011 by wangx49931
Categories: SOA

If I had run into this topic a couple of years ago, I would have just laughed at it and thought it was just another example of people trying to make things more complicated than they need to be. However, after two years of intensive involvement in an SOA product, I have started to believe that an SOA strategy is a must-have, especially for big organizations.

At the very beginning, my fellow colleagues and I approached this development from a mostly technical perspective. We brought in a lot of industry best practices, expertise, and best-known components, and built a nice platform which technically has a lot of good features and functions for other teams to adopt. To our surprise, the adoption process was painful: we constantly got pulled into pointless discussions and grew very frustrated. We tried to convince people that the technology was the best fit for them; we did demos, prototypes, everything that could possibly be done. However, very little progress was made. We started wondering what we had done wrong and how we could change the situation.

I had to step back and look at the problem from another angle. It is not just a technical issue. Non-technical aspects like business, organization, politics, and human nature play a bigger role in this turmoil. First of all, I found that a siloed business structure is the biggest obstacle to adopting new technologies or any technology change: each silo only looks at its own problems and has no interest in seeing what makes sense across the board. Second, understanding how the technology can help transform the business is the key. Without buy-in from both the business and high-level executives, any attempt from the technology side will fail. I ran across the following website and like the SOA strategy described there.

http://steerahead.com/soastrategy.htm

It gave me a fresh perspective for thinking about the problem from a different angle.

Leverage CIM in your Service Design

Posted January 5, 2010 by wangx49931
Categories: CIM, Energy, IT

This is not a new topic; actually, it has been one of the industry's best practices and everybody does it today. So why post a blog about it?

Let me get to the point right away. An architecture decision you have to make when leveraging CIM in service design is how to represent CIM in the XML Schemas on which your service definitions (WSDLs) will be based. In general, there are two approaches:

  1. Define base schema(s) including all necessary CIM classes and your extensions. All your services (WSDLs) reference the base schema(s).
  2. Use the IEC 61968 standard XSDs or create your own XSDs for certain data exchanges, and build WSDLs based on those XSDs.

The fundamental difference between them is that approach #1 has only one representation for each class, while approach #2 may have multiple representations of a class in different XSDs if each XSD has a unique namespace. Pros and cons:

Approach #1

Pros:

  • Single representation for each class, easy to understand and manage

Cons:

  • Hard to define data restrictions for each data exchange or interaction scenario

Approach #2

Pros:

  • Relatively easy to define data restrictions for each data exchange or interaction scenario

Cons:

  • Since there might be multiple representations of each class, it is hard to manage, especially if you need to coordinate activities between different services. For instance, suppose you have a meter event service and a meter ping service, each of which has a meter definition in its corresponding XSD. In your code (let's assume you have generated your Java code based on your WSDLs and XSDs), you will find two meter classes in different packages. If you have a business process that first captures a meter-up event and then issues a meter ping to confirm, the business process has to handle two meter classes across the two services even though they mean the same thing.
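To make the duplication concrete, here is a minimal Java sketch of the situation. All class names are hypothetical, and nested classes stand in for the two generated packages; the point is that the business process ends up writing mapping code between two types that represent the same meter:

```java
public class MeterMappingSketch {

    // As a code generator might emit it from the meter-event service XSD
    static class EventMeter {
        String mrid;
        EventMeter(String mrid) { this.mrid = mrid; }
    }

    // As generated, separately, from the meter-ping service XSD
    static class PingMeter {
        String mrid;
        PingMeter(String mrid) { this.mrid = mrid; }
    }

    // The business process must translate between the two representations,
    // even though both describe the same physical meter
    static PingMeter toPingMeter(EventMeter m) {
        return new PingMeter(m.mrid);
    }

    public static void main(String[] args) {
        EventMeter event = new EventMeter("MTR-001");
        PingMeter ping = toPingMeter(event);
        System.out.println(ping.mrid); // prints MTR-001
    }
}
```

With approach #1, both services would share the single base-schema meter class and this mapping layer disappears.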

If you only care about routing messages around and stitching systems together, I guess approach #2 is not a bad choice. If you want to provide an environment for application development, business process automation, service orchestration, or anything that requires coordination between different services, my recommendation is approach #1, if you don't want to be killed by your developers 🙂

A quick note here: WSDL doesn't imply a Web Service implementation. It is used to define the “interfaces”, but implementations of a service can vary.

Service Oriented and Object Oriented

Posted December 30, 2009 by wangx49931
Categories: IT, SOA

I was recently involved in a Smart Grid application prototype that required a great deal of integration of different systems and products to provide the data needed by the app. The overall system architecture is based on SOA, and we successfully defined, developed, deployed, and managed all the services for the prototype. An interesting question was raised during one of the demos: what is the difference between Service Oriented and Object Oriented?

Thomas Erl did a comprehensive comparison in his book “SOA Principles of Service Design”. Based on my experience, I agree with the author that service orientation and object orientation are complementary and can be used separately or together. In the prototype, the overall integration (service modeling, invocation, message routing, integration patterns, orchestration, etc.) was based on SOA, while most of the service implementations were based on object-oriented concepts. Thomas points out that service orientation aims to harmonize a larger portion of the enterprise, ideally the enterprise as a whole, whereas object orientation has been applied to segments of the enterprise. I think this is the key distinction between the two: service orientation is designed for increased scope.

Thomas compared fundamental concepts between the two architectures: encapsulation, inheritance, generalization and specialization, abstraction, polymorphism, the open-closed principle, don't repeat yourself, single responsibility, delegation, association, composition, and aggregation. Some of them are very similar between the two; some are not. There are several that I am particularly interested in:

1. Inheritance – This is not practically applied to service orientation, due to its focus on loose coupling and service autonomy. We had a brief discussion during the prototype about how to auto-generate some code based on services. We excluded service inheritance from the architecture so that services would have fewer dependencies. It doesn't seem we need such a capability at all.

2. Polymorphism – In object orientation, polymorphism means the same request sent to different subclasses will have different results based on the variance of the subclass implementations. In my opinion, a similar concept applies to service orientation as well, but from a different angle. For the same service contract, there can be more than one implementation. Service orientation should allow users to determine which implementation to use at both deployment time and run time. This kind of capability may already exist in some SOA platforms. We used Spring injection for deployment-time binding in the prototype, which helped us pick a different implementation for a service.
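As an illustration of deployment-time binding, here is a minimal plain-Java sketch (the service and class names are made up; in the actual prototype Spring did this wiring declaratively): one service contract, two implementations, bound by a configuration value.

```java
public class ServicePolymorphismSketch {

    // The service contract: callers depend only on this interface
    interface MeterPingService {
        String ping(String meterId);
    }

    // One implementation, e.g. talking to a real head-end system
    static class ProductionPingService implements MeterPingService {
        public String ping(String meterId) {
            return "pinged " + meterId + " via head-end";
        }
    }

    // An alternative implementation, e.g. for testing or simulation
    static class SimulatedPingService implements MeterPingService {
        public String ping(String meterId) {
            return "simulated ping of " + meterId;
        }
    }

    // The "injection point": a deployment profile selects the implementation
    static MeterPingService bind(String profile) {
        return "production".equals(profile)
                ? new ProductionPingService()
                : new SimulatedPingService();
    }

    public static void main(String[] args) {
        MeterPingService svc = bind("test");
        // Same contract, different behavior depending on the bound implementation
        System.out.println(svc.ping("MTR-001")); // prints: simulated ping of MTR-001
    }
}
```

Spring's dependency injection generalizes this idea by moving the selection out of code and into configuration, so no caller ever names a concrete implementation.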

3. Relationship – Object relationships are well modeled in the object orientation world; concepts like association, composition, and aggregation are widely supported. Personally, I don't think relationships between services are well modeled or supported in the service orientation world. I understand that services should be relatively independent, so one can argue relationships are not needed. In reality, though, service composition, orchestration, business processes, and integration patterns all require understanding the relationships between services. Without such a capability, it would be hard to manage the interactions between services.

Another aspect worth mentioning is tooling. Object orientation seems to have a lot more tooling support. Most SOA platforms are somewhat proprietary, which causes rework when moving from one to another.

Smart Grid Solutions: Integration vs. Application

Posted July 2, 2009 by wangx49931
Categories: Energy

Tags: ,

I would like to share some of my thoughts on the roles that integration and applications play in Smart Grid solutions. I would love to hear more opinions in this area.

Smart Grid is a huge topic. This post is only focused on back office solutions for Smart Grid.

There are two major areas where Smart Grid solutions can help utilities improve: operational efficiency and energy efficiency. Here is an example of each. With smart meter technology and its data collection and management capabilities (head-end system, MDM, etc.), utilities will be able to automate many existing business processes (meter reading, billing, customer move in/out, rate changes, outage management, etc.) which currently involve a lot of manual intervention. Such an upgrade will have a huge positive impact on operational efficiency. Energy efficiency can be improved by appropriately managing peak load, line losses, and power quality. Traditionally, utilities have had very limited ways to control load other than load shedding and lowering the voltage at certain spots within operational limits. With demand response capabilities, new business processes can be implemented to manage peak load and improve energy efficiency. In general, operational and energy efficiency can be improved either by automating existing business processes or by implementing new ones.

System integration plays a key role in automating existing business processes. Currently, AMI is one of the biggest drivers for Smart Grid, and almost every smart meter installation requires a huge amount of system integration effort. Various systems (CIS, MDM, WMS, OMS, etc.), depending on the project scope, need to exchange information with each other in order to automate the related business processes. I have seen a lot of integration projects like that, and their essence is moving data around. I believe such integration efforts will still dominate the market for the next 3-5 years.

Is system integration the answer? Well, no, because I just can't see a Smart Grid solution as simply a system that stitches various components together. Don't get me wrong, it is an important aspect, but there is huge potential in the data siloed in each component. You probably already know the direction I am going. Yes, Smart Grid is not just about automating existing business processes and stitching systems together; it is all about how to utilize the new data sources to discover things we never discovered before and make smarter decisions.

The reality is that data is collected and maintained by different systems: SCADA for telemetry, the head-end for meter data, CIS for customer data, grid operation for the network model, Asset Management for assets, etc. Smarter analytics and applications require fully integrated data. For example, demand response requires a real-time network model, customer information, meter data, load forecasts, price, etc. Considering the size of the system, the amount of data, and the system quality attributes, this presents a great challenge to the industry in terms of how to manage and integrate data. I believe this is going to be a battlefield for SIs and product vendors, simply because whoever controls the data controls the destiny.

Integration is one of the most important enabling technologies for smart grid. On top of it, applications will deliver the ultimate benefit to utilities and customers.

Add a static xml string to a XML document

Posted July 1, 2009 by wangx49931
Categories: IT

Tags: ,

Yesterday, my colleague and I were doing some last-minute programming before he left for vacation. We needed to add static KML style information to a KML file. Since we were in a great hurry, we simply followed the straightforward DOM API approach of appending nodes to the output XML document. One of the styles looks like:

<Style id="yellowLineGreenPoly">
  <LineStyle>
    <color>7f00ffff</color>
    <width>4</width>
  </LineStyle>
  <PolyStyle>
    <color>7f00ff00</color>
  </PolyStyle>
</Style>

What we did was create a node for each element and append it to the parent node. You can imagine the pain of going through this practice for many styles. This morning, with my colleague gone (somewhere in Central America, enjoying his resort and beach), I started searching the internet for better solutions. In general, there are two ways:

1. Put the static style information in a file, build a document from that file, and import the style nodes into the output document. The key point is using the importNode function of the output document object. It looks like:

// the destination document must do the importing, or appendChild
// will fail with WRONG_DOCUMENT_ERR
kmlDoc.appendChild(kmlDoc.importNode(style, true));

2. Make the static XML a string and append the string to the output document. I found code available at:

http://stackoverflow.com/questions/729621/convert-string-xml-fragment-to-document-node-in-java. The function is:

import java.io.IOException;
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

public static void appendXmlFragment(
        DocumentBuilder docBuilder, Node parent, String fragment)
        throws IOException, SAXException {
    Document doc = parent.getOwnerDocument();
    // Parse the fragment string into a standalone DOM tree
    Node fragmentNode = docBuilder
            .parse(new InputSource(new StringReader(fragment)))
            .getDocumentElement();
    // Import the node into the target document, then attach it
    fragmentNode = doc.importNode(fragmentNode, true);
    parent.appendChild(fragmentNode);
}
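For reference, here is a small self-contained sketch of exercising this second approach with the yellowLineGreenPoly style from earlier (the class and variable names are my own, and the import-and-append step is inlined to keep it runnable):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class AppendFragmentDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilder db =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();

        // Output KML document with an empty <Document> root
        Document kmlDoc = db.newDocument();
        Node root = kmlDoc.appendChild(kmlDoc.createElement("Document"));

        // The static style kept as a plain string
        String style =
                "<Style id=\"yellowLineGreenPoly\">"
              + "<LineStyle><color>7f00ffff</color><width>4</width></LineStyle>"
              + "<PolyStyle><color>7f00ff00</color></PolyStyle>"
              + "</Style>";

        // Parse the fragment, import it into the output document, attach it
        Node fragment = db.parse(new InputSource(new StringReader(style)))
                          .getDocumentElement();
        root.appendChild(kmlDoc.importNode(fragment, true));

        // The root now has one <Style> child
        System.out.println(root.getChildNodes().getLength()); // prints 1
    }
}
```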

Both approaches work just fine. I guess my colleague won't complain anymore 🙂

Got Confused by GML and KML

Posted June 30, 2009 by wangx49931
Categories: IT

Tags: ,

I am fairly new to both GML and KML. Yesterday, one of my colleagues asked me to convert a GML file to a KML file with very basic information, which I did, and it loaded successfully into Google Earth. After all that, I started wondering why Google Earth doesn't support GML, and what the difference between GML and KML is. I did a quick search and found a couple of articles on this topic. Basically, GML describes the geometries themselves while KML describes how to display them; they are complementary, not competing. But I still can't understand why Google Earth doesn't take GML and create a basic layout for all the objects. Well, it might be just my ignorance, or my colleague wanted to test my programming skills 🙂

Waiting for the results of Pioneer Smart Grid Projects

Posted June 30, 2009 by wangx49931
Categories: Energy

Tags:

A recently published article (http://www.spectrum.ieee.org/energy/the-grid/smart-local-electricity-grid/) really got me thinking about what we can expect from the pioneer smart grid projects. SmartGridCity and Smart Grid Miami are the two top hits in my mind. Beyond technology experiments and validation, there is more we can expect from those projects.

First of all, a question that has been haunting me for a long time, namely how much efficiency can be improved by introducing all kinds of technologies, might be answered with some solid data and facts.

Second, customer behavior. A good understanding of how customers respond to the new concept might be revealed. An interesting point is that SmartGridCity selected Boulder, Colorado as the demo city, whose citizens are believed to have “greener” mindsets. A comprehensive study of customer behavior based on education background, occupation, income, etc. would provide a lot of insight.

I have to mention renewable energy. Both projects involve renewable energy (primarily solar) and plug-in hybrids. This adds another dimension of difficulty, yet it is another interesting and promising area. How the distributed resources all work together, and their impact on the grid, are things I badly want to see.

Hopefully, these pioneer projects can provide utilities, engineers, customers, and legislators useful information.