The Elements of Operational Design May Be Used
Operational Design
Next-Generation Business Process Management (BPM)
Fred A. Cummins, in Building the Agile Enterprise (Second Edition), 2017
Enterprise Collaboration Network
Operational design focuses on how the enterprise actually works, at a level of detail beyond the management-level, conceptual model represented with VDML. The enterprise is a network of collaborations. Many of these collaborations are ad hoc and should be recognized so that they can be supported and appropriately staffed. Those that vary with circumstances but can be characterized by roles, responsibilities, and exchanges of deliverables can be designed for case management. The best understood and most widely recognized collaborations are, of course, the prescriptive, repetitive processes primarily involved in the actual delivery of products or services.
URL: https://www.sciencedirect.com/science/article/pii/B9780128051603000041
Business Building Blocks
Fred A. Cummins, in Building the Agile Enterprise (Second Edition), 2017
Capability Unit Management
There are four primary aspects to the management of a capability unit: (1) capability method design, (2) management of resources, (3) management of operations, and (4) value optimization. These aspects may all be managed directly by one organization unit, or some may be delegated to, or acquired from, other collaborations.
Capability Design
The operational design of an activity network is the responsibility of an organization unit that is the capability method owner. This will be separated from responsibility for performing the capability method when there is a need for multiple organizations to perform it. For example, a budgeting capability method may be defined by the accounting organization for use by all organization units. This can ensure policy compliance, security, or operating consistency. This may limit the ability of a provider organization unit (that performs the method) to innovate and improve its operations.
Resource Management
A parent organization unit, separate from a capability method provider, may have responsibility for resource management where there is an opportunity for economies of scale or workload balancing among similar capability methods. Typically, this will be a capability unit organization. The manager of resources owns stores. Consumable resources are used from a store and replenished by input deliverable flow. Personnel and other reusable resources are obtained from pools (a specialized store). These resources are assigned from their pool when needed and released when no longer needed. Not all personnel and resources for a capability are necessarily owned by the same organization unit, but their store must be accessible by the capability methods that need them.
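The store-and-pool pattern described above can be sketched in code. This is a minimal illustration only; the class names and the simple consume/replenish and assign/release protocol are assumptions for the example, not part of VDML.

```python
class Store:
    """Holds consumable resources: drawn down by use,
    replenished by an input deliverable flow."""
    def __init__(self, name, quantity=0):
        self.name = name
        self.quantity = quantity

    def consume(self, amount):
        if amount > self.quantity:
            raise ValueError(f"store '{self.name}' cannot supply {amount}")
        self.quantity -= amount

    def replenish(self, amount):
        self.quantity += amount


class Pool(Store):
    """Specialized store for reusable resources such as personnel:
    a resource is assigned when needed and released when no longer needed."""
    def assign(self):
        self.consume(1)      # temporarily removes one resource from the pool

    def release(self):
        self.replenish(1)    # returns the resource to the pool for reuse
```

In this sketch the pool is simply a store whose contents come back after use, which mirrors the distinction drawn above between consumable and reusable resources.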
Operational Management
Operational management focuses on the day-to-day operation of a capability method. The operational manager is in charge of the work being done, supervises the participants, and resolves problems. Generally, this will be the capability unit that offers the capability.
Value Optimization
Optimization of a capability is based on its impact on the value streams that use it, the values it contributes, and the importance of those values to the value stream consumers. Typically, there will be a trade-off between speed, quality, and cost. For some value streams speed may be most important, while for others cost may matter most. Since shared capability methods impact multiple value streams, it is important that the impact of changes be evaluated from an enterprise perspective.
Consequently, capability service level agreements should include measurements of value contributions, and consumers of a capability service should monitor compliance, particularly for those value types that are important to their value stream customers.
If a capability method uses another capability method that makes a change affecting a value measurement, the value measurement change will show up in the value measurement of the calling capability method and any methods that call it, directly or indirectly. This propagation of effect works in the VDML model, but it may be more difficult to observe in business operations. This reinforces the importance of monitoring for compliance with a service level agreement.
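The propagation of a value-measurement change through calling capability methods can be illustrated with a toy call graph. The additive cost model and the method names below are assumptions for illustration, not a VDML rule.

```python
def total_value(method, contributions, calls):
    """Sum a method's own value contribution (here, a cost) with the
    contributions of every method it calls, directly or indirectly."""
    return contributions[method] + sum(
        total_value(callee, contributions, calls)
        for callee in calls.get(method, [])
    )

# A change in a called method's measurement shows up in every caller.
contributions = {"fulfill_order": 5, "production": 20, "shipping": 8}
calls = {"fulfill_order": ["production", "shipping"]}

before = total_value("fulfill_order", contributions, calls)   # 33
contributions["shipping"] += 2    # the shipping method becomes more costly
after = total_value("fulfill_order", contributions, calls)    # 35
```

The change made in `shipping` surfaces in the measurement of `fulfill_order` without `fulfill_order` itself changing, which is exactly the propagation effect that makes service-level monitoring important.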
Capability Outsourcing
An outsourcing relationship takes a somewhat different form. Outsourcing is the engagement of an external entity to provide services that would otherwise be provided by an internal capability method, capability unit, or complementary set of capability methods.
Analysis of capabilities should include capabilities that are outsourced, even though the enterprise does not own the elements of those capabilities. The outsourced services must be integrated with enterprise operations; they contribute values that must be considered in the resulting value propositions. An outsourced capability can achieve greater economies of scale and may scale more readily in response to changes in seasonal demand or market share. A capability should be distinguished from a competency: a competency requires that the enterprise own the elements of the capability, usually for competitive advantage. A capability should not be outsourced if it is important to differentiation in the marketplace or if there is no competition among outsourcing providers.
The enterprise will not have control over the implementation of an outsourced capability and must rely on marketplace competition to control the cost and performance of the services. The typical purpose of outsourcing is to realize economies of scale and scalability that cannot be achieved within the enterprise. As the enterprise organization is transformed to a CBA, it is important to consider outsourcing as an alternative to the transformation and consolidation of existing capabilities.
The exchange with an outsourcing provider will incorporate one or more capability methods (services) into one or more value streams. The outsourcing provider can be viewed as a capability unit with multiple, complementary capability methods. Depending on the nature of the outsourced capability, the delegations may be tightly coupled, the same as for an internal capability, or they may be loosely coupled, asynchronous deliverable flows.
In general, the focus will be on the service interfaces and service level agreements. The outsourcing provider's implementation will be a black box, preserving the provider's option to change its implementation to address new business challenges and opportunities.
There may be other exchanges through business networks for management of the outsourcing relationship and payment for services. Managers within the enterprise do not control the resources or the operations of the outsourced service units. The enterprise must manage the services on the basis of a service contract, costs, and performance metrics, along with assessment of the satisfaction of internal users with the outsourced service.
The service interfaces require close attention. The interfaces are more difficult to change because the same services are being used by other enterprises. In addition, all requirements must be reflected in the interface specification and service level agreement; otherwise, there is no basis for corrective action when the service does not meet expectations.
The service interfaces should be based on industry standards, if available. The enterprise should be able to switch to an alternative service provider if the current provider is not meeting expectations. Furthermore, the ability to switch to alternative services ensures competition between service providers to drive improvements in cost and performance.
Obviously, if the same services are available to competitors, they cannot be a source of competitive advantage. At best, outsourcing moves the enterprise to a best-practices level of performance. At the same time, the management of the enterprise does not have the burden of managing the implementation or ongoing operation of the service, although it is important for enterprise management to measure performance and enforce service agreements.
The risks and benefits of outsourcing are outlined in Table 3.1.
Table 3.1. Risks and Benefits of Outsourcing
Risks | Benefits |
---|---|
Loss of control over the implementation and operation of the service | Economies of scale beyond what can be achieved internally |
Reliance on marketplace competition to control cost and performance | Scalability to respond to changes in seasonal demand or market share |
Shared service interfaces that are difficult to change | Reduced management burden for implementation and ongoing operation |
No competitive differentiation, since the same services are available to competitors | Movement toward a best-practices level of performance |
URL: https://www.sciencedirect.com/science/article/pii/B978012805160300003X
Design Space
Milan Guenther, in Intersection, 2013
Operations

In one particular project, design teams explored operations within shared service centers. Based on research of people's activities and in close collaboration with experts in business processes, the design team reshaped work practice, procedures and tools according to new operating models in one SAP service center. In another setting, designers and researchers worked closely with the SAP Human Resources group to define a career development process to be applied across the company. In both cases, design solutions translated strategic goals into transformation projects, in a process of active collaboration with the people impacted by the redefined processes.
In projects reshaping operational design practice, a deep understanding of the work and a mapping of processes, activities and functions of the people involved enabled significant performance improvements and a better work experience. By helping employees to solve their own task problems efficiently, this design work directly contributes to SAP's objective of increased daily productivity.
We have found that to be successful at creating strong user experiences, we must engage deeply with the surrounding business processes, well beyond the usual framework of the User Experience practitioner.
S. Kirsten Gay, Director User Experience Consulting at SAP
When attempting to improve enterprise efficiency and employee productivity, it is crucial to focus not only on well-engineered solutions but also on emotional value. What's fun gets done! As such, User Experience strategy, research and design are not an option; they are a necessity for developing economic value and company success.
Dirk Dobiéy, Vice President Knowledge Management at SAP
URL: https://www.sciencedirect.com/science/article/pii/B978012388435050007X
22nd European Symposium on Computer Aided Process Engineering
Marta Moreno-Benito, Antonio Espuña, in Computer Aided Chemical Engineering, 2012
2 MIDO for batch process synthesis and operational design
Integrated batch process synthesis and operational design involves the selection of the required processing tasks and their sequencing to transform raw material into final products, together with the definition of dynamic control profiles, the batch stages constituting each task, and their assignment to equipment units. In particular, the synthesis decisions covered in this work are the equipment configuration (Yconf), the selected equipment pieces, the task-equipment assignment, and the number of batches, all of them qualitative in nature. In addition, the operational design, understood as the feed-forward control that defines the batch process model, requires dynamic decision profiles, such as input and output flow rates (Fj,in and Fj,out) and temperature (Tj,ku) at each unit j and batch stage ku, as well as time-invariant decision variables, such as processing times at each batch stage k of the plant. All these decisions can be modeled by combining the use of logics in generalized disjunctive modeling with the representation of discrete events and differential equations in multistage models. The resulting MLDO problem may afterwards be relaxed into a MIDO problem.
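As a much-simplified illustration of combining qualitative synthesis decisions with discretized control profiles, the toy enumeration below pairs an equipment choice with a stage-wise temperature profile for a single conversion target. The first-order kinetics, the reactor names, the 0.85 conversion target, and the cost proxy are all invented for the example and bear no relation to the authors' MLDO formulation, which uses disjunctive and multistage dynamic models rather than exhaustive search.

```python
import math
from itertools import product

def conversion(k, temp_levels, dt=1.0):
    """Toy first-order conversion over equal batch stages; each stage
    applies a piecewise-constant 'temperature level' that scales the rate."""
    x = 0.0
    for level in temp_levels:   # the discretized control profile
        x += (1.0 - x) * (1.0 - math.exp(-k * level * dt))
    return x

configs = {"reactor_A": 0.3, "reactor_B": 0.5}  # qualitative decision (Yconf)
levels = [0.5, 1.0, 1.5]                        # discretized level per stage
best = None
for name, k in configs.items():
    for profile in product(levels, repeat=3):   # all stage-wise profiles
        x = conversion(k, profile)
        cost = sum(profile)                     # crude proxy for utility cost
        if x >= 0.85 and (best is None or cost < best[2]):
            best = (name, profile, cost)
```

The point of the sketch is only the shape of the decision space: a discrete equipment choice combined with a stage-indexed control profile, which a real MLDO/MIDO formulation would optimize simultaneously instead of enumerating.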
URL: https://www.sciencedirect.com/science/article/pii/B9780444595201501251
26th European Symposium on Computer Aided Process Engineering
David Fernandes del Pozo, ... Ingmar Nopens, in Computer Aided Chemical Engineering, 2016
3.3 Microreactor modelling
The microreactor model needs to capture the relation between the kinetics, the contacting pattern, and the operational design requirements in order to model the species concentrations. For many microreactor systems, the ideal plug flow microreactor model (Eq. (2)) is largely applicable when the criterion shown in Eq. (3) is met.
(2)
(3)
With τ_i being the residence time, V_i the volume, −r_[S] the reaction rate, Bo the dimensionless Bodenstein number, d the characteristic length, u the average inlet velocity, D the Taylor dispersion coefficient, the molecular diffusion coefficient, and β a geometrical factor. It should be noted that the Taylor dispersion coefficient in the radial direction should not include the convective term. The (bio)catalyst configuration is also important in the modelling procedure, as it specifies the contacting pattern between the fluid, the reactants, the products, and the catalyst. Moreover, mass transfer effects cannot be neglected by default in microreactors (Walter et al., 2005), and it is recommended to take these into account in the modelling procedure as well. For instance, appropriate dimensionless numbers such as the second Damköhler number can be used to help detect mass transfer problems.
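Since Eqs. (2) and (3) are not reproduced in this excerpt, the sketch below only illustrates the kind of check the criterion implies, using the commonly cited forms Bo = u·d/D and a Taylor-Aris-type dispersion D = D_m + β·u²·d²/D_m, with an assumed plug-flow threshold Bo > 100; the actual expressions, symbols, and threshold in the paper may differ.

```python
def bodenstein(u, d, D_m, beta):
    """Bodenstein number Bo = u*d/D, with a Taylor-Aris-type axial
    dispersion coefficient D = D_m + beta * u**2 * d**2 / D_m.
    (Assumed forms; the paper's Eq. (3) is not reproduced here.)"""
    D = D_m + beta * u**2 * d**2 / D_m
    return u * d / D

def plug_flow_ok(u, d, D_m, beta, threshold=100.0):
    """Assumed criterion: plug flow model applicable when Bo > threshold."""
    return bodenstein(u, d, D_m, beta) > threshold
```

For example, a 100 µm channel (d = 1e-4 m) with u = 1e-3 m/s, D_m = 1e-9 m²/s, and β = 1/192 gives a Bodenstein number well under 100, so the ideal plug flow model would not be justified under these assumed forms.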
URL: https://www.sciencedirect.com/science/article/pii/B9780444634283502095
European Symposium on Computer Aided Process Engineering-12
Irene Papaeconomou, ... Sten Bay Jørgensen, in Computer Aided Chemical Engineering, 2002
5 Conclusion
A general methodology that allows the generation of operational sequences/routes for batch operations has been developed. The operational design of batch processes has been tested for the special case of a batch reactor. All the alternatives generated by the algorithm have been verified through dynamic simulation and found to be feasible. Together with the generation of feasible alternatives, the algorithm also generates the necessary operational information that can be used directly for verification (experimental and/or by simulation). The optimal sequence of operations (at least locally optimal) can be found among the generated feasible alternatives. Within the solution space considered in the case study, alternative 7 appears to be optimal with respect to minimization of operational time (without taking into consideration the associated utility costs), while alternative 1 appears to be optimal with respect to minimization of utility costs (without taking into consideration the operational time). However, if the aim is to minimize both time and utility costs, then, since there is clearly a trade-off between these competing objectives, a selection criterion needs to be established before an appropriate choice can be made. The methodology, the algorithm, and the associated computer-aided tools provide a systematic and integrated approach to the solution of problems involving batch operation sequencing and verification. Current work is investigating batch operations involving mixing, reactions, and separations in the same sequence in order to achieve a desired product.
URL: https://www.sciencedirect.com/science/article/pii/S1570794602800761
Failsafe Software Design:
Jeffrey M. Sieracki, in Mission-Critical and Safety-Critical Systems Handbook, 2010
3.3 Verification and Redundancy in the Implementation Process
Relying on encapsulated functionality, as discussed previously, dictates thorough component module testing. These critical paths become part of the design and should be traceable elements in the test plan, just as requirements and specifications documents lead to the core operational design and operational test plan.
To the extent possible, meet specific needs with specific traceable code. In some cases, this may be a watchdog timer running on a separate thread; in others, it may mean encapsulating hardware and sensor interactions; and in still others, it may be as simple as laying out code so that a master loop clearly and unavoidably checks and confirms hardware status on each pass-through.
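As one concrete, deliberately minimal illustration of the watchdog idea mentioned above, the sketch below runs a monitor on a separate thread and invokes a fault handler if the main loop stops "kicking" it. The class name, the check interval, and the fault action are assumptions for the example, not a prescription from the text.

```python
import threading
import time

class Watchdog:
    """Runs on its own thread; trips if kick() is not called
    within `timeout` seconds."""
    def __init__(self, timeout, on_fault):
        self.timeout = timeout
        self.on_fault = on_fault              # e.g., force hardware safe
        self._last_kick = time.monotonic()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)

    def start(self):
        self._thread.start()

    def kick(self):
        """Called by the master loop on each pass-through."""
        self._last_kick = time.monotonic()

    def stop(self):
        self._stop.set()

    def _watch(self):
        # Poll several times per timeout period; trip once and return.
        while not self._stop.wait(self.timeout / 4):
            if time.monotonic() - self._last_kick > self.timeout:
                self.on_fault()
                return
```

Because the watchdog is encapsulated on its own thread, it remains a specific, traceable piece of code that a test plan can exercise independently of the operational loop it guards.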
Ideally, each software mitigation step listed in the RA/HA should be traceable to a specific, encapsulated code segment. This will reduce risk of unforeseen interactions, simplify verification, and increase the likelihood of delivering a robust, operational system.
In practice, not every aspect of every mitigation step can be encapsulated. For example, start-up code needs to make calls to initiate certain modules. Operational sequence code may also need to make specific tests to take advantage of the encapsulated aspects—such as making the laser safe before opening the door. As discussed earlier, however, a certain degree of redundancy can help simplify verification and ensure compliance with required procedure.
Non-encapsulated steps need to be conveyed clearly to code designers and verified as a matter of course in code reviews. Permitting and encouraging strategic code redundancy where safety- and mission-hardware interaction occurs not only aids the reviewer's work, but also provides a safety net. It is certainly possible to achieve Six-Sigma level verification without doing so, but the potential for hidden bugs in complex modern software is so high that the belt-and-suspenders approach of targeted redundancy will almost certainly be a safer and more cost-effective plan.
Targeted redundancy is not synonymous with bloated code. Repeated elements are limited to calls that verify critical hardware status, and the redundancy arises because these calls may occur in more than one subroutine within a sequential calling chain. The hardware functions should be encapsulated and the operational code clean and readable. Old-school software efficiency hounds (the author among them) can take solace in the reality that modern chip designers often must include a high percentage of entirely redundant circuit regions on their silicon dies to compensate for frequent failures in the manufacturing process. A high mission success rate takes precedence, and carefully applied redundancy is ultimately efficient and cost effective.
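The pattern of repeating a critical status check in more than one subroutine along a call sequence might look like the following sketch. The laser/door scenario echoes the earlier example; the interlock API and function names are invented for illustration.

```python
class Interlock:
    """Encapsulated hardware status: the one place that knows the laser state."""
    def __init__(self):
        self.laser_on = True

    def laser_safe(self):
        return not self.laser_on

    def make_laser_safe(self):
        self.laser_on = False


def prepare_shutdown(hw):
    hw.make_laser_safe()
    assert hw.laser_safe()    # first check, at the point of action


def open_door(hw):
    # Targeted redundancy: re-verify the safety condition even though
    # prepare_shutdown() already established it earlier in the sequence.
    if not hw.laser_safe():
        raise RuntimeError("door blocked: laser not safe")
    return "door open"


hw = Interlock()
prepare_shutdown(hw)
result = open_door(hw)
```

The repeated check costs one call, but it means `open_door` is safe even if some future code path reaches it without going through `prepare_shutdown` first, which is the safety net the text describes.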
Version control, test and control plans, and linked verification of the elements in the RA/HA matrix are the means by which safety- and mission-critical elements are traced and ensured in the final product. The process documentation often feels like a burdensome evil when an engineer would rather get down to coding, but it is ultimately a powerful tool for quality assurance. Taken with a positive view, the process can be leveraged by the critical-system engineer to make implementation a very targeted and efficient process.
Other elements of redundancy are becoming standard fare in modern development. A generation ago, programmers tended to be quite curmudgeonly about their peers looking over their shoulders. Today code reviews are ingrained in the development process. By subjecting each function call and line of code to multiple pairs of eyes, the chances of catching and identifying bugs and obscure potential failure modes go up enormously. Code reviews should consider style, insofar as keeping code readable and conforming as necessary to maintain consistency across a work group. However, the emphasis of the review should be on examining whether a particular code segment meets its design purpose and avoids hidden flaws. The best code reviews include multiple team members stepping through the code visually, not only looking for bugs but also challenging each other intellectually with various use cases and code entry conditions to determine whether something was missed.
Adding in safety- or mission-critical cross-checks can become natural. If reviewing encapsulated critical code, make sure that it meets its design parameters, that it is tight and straightforward, and that it is strictly independent of other system operations insofar as possible. If reviewing general operational code, have the RA/HA checklist handy, ask risk-associated questions as you go, and make an explicit pass through the checklist once to evaluate that each condition is either addressed or not applicable.
Some shops have introduced more extreme levels of implementation code production redundancy with good effect. These include ongoing multiple programmer integration and even paired "extreme" coding teams that work side by side on every line. The success of these ideas lends more and more credence to the cost effectiveness of carefully applied redundancy when it comes to saving time in achieving a reliable result.
URL: https://www.sciencedirect.com/science/article/pii/B9780750685672000020
12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering
Hiroshi Osaka, ... Tetsuo Fuchino, in Computer Aided Chemical Engineering, 2015
1 Introduction
In the chemical process industry, it is well accepted that operational documents such as operating procedures and training manuals are often inadequate and ineffective, and it is widely recognized that these shortcomings contribute substantially to plant downtime and to costly and dangerous industrial incidents.
The key reasons for these inadequate and ineffective documents are that the original operational requirements from plant owners are not clear, and that the fundamental design intentions and design rationales of the process designers are not incorporated into the operational documents. This is because the operational design for chemical processes is performed by the process designers during the earlier process and plant design phase, while, in a largely separate and much later exercise, the operational documents are created by the plant owners. This decoupling of design and operations means that the documents do not cover all the necessary operation modes and do not support the design intentions and design rationales.
This paper presents a logical methodology and data structure whereby the key operational design output is carried forward into the operational documents producing efficient, easily understood and flexible procedural documents that will ultimately result in safer and more efficient operations.
To achieve this, we first identify typical data elements for the operational documents by referring to generally accepted design guidelines. Next, we identify the operational design output during the process and plant design phase by utilizing systematized business process models for plant lifecycle engineering (Fuchino et al., 2011). In particular, with regard to the description of processes, operations, and equipment, we apply the design philosophy of ANSI/ISA-88 (ANSI/ISA-S88.01, 1995) to the methodology. (ANSI/ISA-88 is a widely accepted industry standard addressing batch process control.) Finally, we integrate these techniques to construct effective, structured operational documents. Documents structured using this methodology reflect all the underlying design intentions, design rationales, and operation modes and, at the same time, are easier to use and easier to maintain. We propose that this will substantially contribute to safer and more efficient operations for chemical processes.
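ANSI/ISA-88's procedural model decomposes a procedure into unit procedures, operations, and phases. A document structure following that philosophy, with fields for the design intentions and rationales the authors want carried forward, might be sketched as follows; the field names are illustrative assumptions, not part of the standard or of the authors' data structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Phase:                        # smallest procedural element in ISA-88
    name: str
    instruction: str
    design_intent: str = ""         # why this step exists (from design phase)
    design_rationale: str = ""      # why it is done this way

@dataclass
class Operation:
    name: str
    phases: List[Phase] = field(default_factory=list)

@dataclass
class UnitProcedure:
    name: str
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Procedure:                    # top of the ISA-88 procedural hierarchy
    name: str
    operation_mode: str             # e.g., normal start-up, shutdown
    unit_procedures: List[UnitProcedure] = field(default_factory=list)
```

Carrying `design_intent` and `design_rationale` down to the phase level is what lets an operating procedure generated from this structure explain not just what to do but why, addressing the decoupling problem described above.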
URL: https://www.sciencedirect.com/science/article/pii/B9780444635785500232
29th European Symposium on Computer Aided Process Engineering
Georgios P. Georgiadis, ... Michael C. Georgiadis, in Computer Aided Chemical Engineering, 2019
5 Conclusions
This work presents the optimization-based production scheduling of a large-scale, real-life food industry. More specifically, all major processing stages of a canned fish production facility have been optimally scheduled. The industrial problem under consideration exhibits significant complexity, due to the mixed batch and continuous stages, each with numerous shared resources, the large number of final products, and the various operational, design, and quality constraints. This make-and-pack structure (one or multiple batch or continuous processes followed by a packing stage) is typical of most food and consumer packaged goods industries; hence, the presented solution strategy can easily be applied to other industrial problems. It has been shown that the suggested solution strategy can optimally schedule even the most demanding weeks of the examined industry in acceptable time, leading to a reduction of overtime production. The proposed strategy can form the core of a computer-aided scheduling tool to facilitate decision-making in the production scheduling of food industries. Current work focuses on the introduction of cost-related objectives, as well as the incorporation of uncertainty in product demands.
URL: https://www.sciencedirect.com/science/article/pii/B9780128186343502174
The Agile Enterprise
Fred A. Cummins, in Building the Agile Enterprise (Second Edition), 2017
Business Collaboration Management
A collaboration is the interaction of a group of people (or of other collaborations) to achieve a desired result. A collaboration may specify the roles of participants, the activities they perform, and the deliverables they exchange. A collaboration may employ technology for coordination of performance or certain functions, and it may engage other collaborations to obtain supporting services. BCM expands the scope of BPM, a management discipline traditionally focused on the management of prescriptive, repeatable business processes. A traditional business process is effectively a collaboration with prescribed activities and flow of control.
Scope of BCM
The scope of BCM extends from the interactions of independent business entities in the business ecosystem to the interactions of managers and knowledge workers, and to the ad hoc work groups and informal exchanges of information that are a necessary part of successful business operations. A collaboration engages participants to achieve a shared objective. The enterprise is a network of collaborations.
Thus, BCM includes adaptive processes and the ad hoc activities of people working together in joint efforts. For example, a collaboration to fulfill an order may include an application that receives and validates orders; people who resolve errors, authorize credit, accept changes, and answer customer questions; and delegations to other collaborations that manage production, manage packaging and shipping, and perform billing and collections. As we dive down into these delegations, the collaborations will involve further people and collaborations. The operational design of these collaborations may be modeled as business processes, where each may be prescribed, adaptive, or ad hoc.
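The order-fulfillment example can be rendered as a toy data model in which a collaboration has roles, activities, and delegations to sub-collaborations. The class and the specific role and activity names are illustrative assumptions, not a modeling notation from the text.

```python
class Collaboration:
    def __init__(self, name, roles=(), activities=()):
        self.name = name
        self.roles = list(roles)
        self.activities = list(activities)
        self.delegations = []   # supporting collaborations engaged by this one

    def delegate(self, collab):
        self.delegations.append(collab)
        return collab

    def all_collaborations(self):
        """This collaboration plus everything it delegates to, recursively."""
        found = [self]
        for d in self.delegations:
            found.extend(d.all_collaborations())
        return found


fulfill = Collaboration("fulfill order",
                        roles=["order clerk", "credit authorizer"],
                        activities=["receive order", "validate order"])
fulfill.delegate(Collaboration("production"))
fulfill.delegate(Collaboration("packaging and shipping"))
fulfill.delegate(Collaboration("billing and collections"))
```

Walking `all_collaborations()` is the "diving down into delegations" described above: each level of the network can itself delegate further, so the full operating structure emerges only by traversing the whole graph.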
In addition to such production collaborations, the management hierarchy is essentially a hierarchy of collaborations, projects are collaborations of project team members, committees are collaborations, interactions with business partners or customers are collaborations, and professional groups are collaborations. The collaboration concept is a consistent basis for modeling all business activities and interactions.
BCM expands the scope of BPM to potentially address all forms of collaboration, i.e., ways people, groups of people, and machines work together to achieve a shared purpose. This expands the ability of practitioners to consider all roles, interactions, and working relationships that are essential to the operation of the business but may be outside the scope of the management hierarchy and prescriptive processes.
Collaboration Network
In a traditional business organization, people are assigned roles in the management hierarchy. To implement substantial changes or solve business problems, ad hoc project teams or task forces may be assembled bringing people with complementary skills together. These teams and task forces often do not appear in any organization charts. In addition, there are many less formal working relationships that are essential to the operation of the business. The full operating structure of the business is not visible from the typical organization chart.
These cross-organizational, ad hoc, and informal relationships have become more pervasive and are often essential to the delivery of customer value. This change in organizational patterns is, at least in part, a result of the automation of rote business operations, leaving the remaining work to knowledge workers who must take actions based on their expertise rather than on prescribed repetitive processes. In order to understand the full operation of the business and the contributions of individual employees, the organization must be represented as a network of collaborations. These interactions may be face to face or by telephone, fax, email, text message, computer applications, or even social media. Important business activities may include essential, informal exchanges that result in intangible, informal artifacts or communications.
Management Hierarchy
As noted above, the management hierarchy is a hierarchy of collaborations. These collaborations are distinct from other collaborations because they are responsible for the management of assets, including money, personnel, facilities, intellectual property, and resources that are used or consumed, as well as the use of services engaged to support or maintain capabilities.
The manager of an organization unit is a participant in the organization unit collaboration as well as a participant in the parent organization-unit collaboration. There may be a variety of informal collaborations among members of an organization unit, and there may be formal processes specified as capability methods in a business conceptual design model. The organization unit assigns personnel and provides other resources to collaborations to do work.
Knowledge Workers and Managers
Today's business organizations involve many knowledge workers—workers whose activities are relatively self-directed and rely on their knowledge and experience. This is a result of automation of most rote activities. Managers are also knowledge workers. These self-directed employees engage in collaborations to develop plans, solve problems, and coordinate work. Technology is now available to support the planning and coordination of these collaborations. CMMN, a specification for modeling this support, is discussed in Chapter 4.
Business Network
The enterprise also collaborates with its business partners, customers, and other external entities. These interactions are modeled as business networks where the different business entities are participants in defined roles, and they interact based on cooperative process specifications—defined as Choreography in BPMN. These processes are not controlled by a shared system but depend on each participating entity following the prescriptions of the choreography.
These collaborations link the enterprise and its internal processes to the external business entities for exchange of deliverables and values. The value exchange specifications support analysis of the viability of these relationships.
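A BPMN choreography prescribes the order of message exchanges between participants without a shared controlling system, so each party can only check locally that the observed exchanges follow the prescription. The sketch below illustrates that idea in a deliberately simplified form; the participant and message names are invented, and real choreography conformance checking is considerably richer than exact sequence matching.

```python
# Prescribed exchange order: (sender, receiver, message)
choreography = [
    ("buyer",    "supplier", "purchase order"),
    ("supplier", "buyer",    "order confirmation"),
    ("supplier", "buyer",    "shipping notice"),
    ("buyer",    "supplier", "payment"),
]

def conforms(observed, prescription=choreography):
    """True if the observed exchanges follow the prescribed sequence exactly."""
    return list(observed) == list(prescription)

ok = conforms([
    ("buyer", "supplier", "purchase order"),
    ("supplier", "buyer", "order confirmation"),
    ("supplier", "buyer", "shipping notice"),
    ("buyer", "supplier", "payment"),
])
bad = conforms([("buyer", "supplier", "payment")])   # steps skipped
```

Because no shared system enforces the sequence, each participant must perform a check like this on its own view of the exchange, which is exactly why the text says the processes "depend on each participating entity following the prescriptions of the choreography."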
URL: https://www.sciencedirect.com/science/article/pii/B9780128051603000016
Source: https://www.sciencedirect.com/topics/computer-science/operational-design