MockMotor, a Mock Server
For the last few years, I’ve been working on a mock server application, MockMotor. Why? Testing with mocks is essential in any SOA environment, including ones based on OSB. As the number of services in the system grows, it becomes harder and harder to line up all the required backends for a round of testing. Seeding those backends becomes tedious and error-prone work. Troubleshooting those backends and their inter-dependencies becomes an increasingly long process.
Upload the old and the updated WSDLs to WSDL Diff and see what’s impacted.
A WSDL Has Got Updated
A vendor has just sent you an updated WSDL. What’s going to break because of the changes in it? This question is very common for any system built on top of SOA. You typically have a few consumers of a WSDL, or a few services that use the same WSDL, and now that WSDL needs to be updated.
Make your XMLs smaller by eliminating duplicate namespaces.
Why XML Size Matters
The size of the payload XML can make a service slow in a few ways:
- The service parses the XML slower
- The service transforms the XML slower
- The service serializes the XML slower before sending it over the wire
- The XML travels over the wire slower
- If the XML is stored (in cache or on disk), its serialized form takes longer to save or read
Generally, we want our XMLs to be smaller if we need our services to be faster.
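To make the size effect concrete, here is a minimal Java sketch (made-up payload, hypothetical namespace URI) comparing the same logical document with per-element namespace declarations versus a single declaration hoisted to the root:

```java
public class NamespaceSize {

    // Hypothetical payload that re-declares the namespace on every element.
    static String verbose() {
        return "<order xmlns=\"http://example.com/ns\">"
             + "<item xmlns=\"http://example.com/ns\">A</item>"
             + "<item xmlns=\"http://example.com/ns\">B</item>"
             + "</order>";
    }

    // The same document with the namespace declared once, on the root.
    static String compact() {
        return "<order xmlns=\"http://example.com/ns\">"
             + "<item>A</item><item>B</item>"
             + "</order>";
    }

    public static void main(String[] args) {
        System.out.println("verbose: " + verbose().length() + " chars");
        System.out.println("compact: " + compact().length() + " chars");
    }
}
```

On a two-item document the savings are modest; on a million-element payload where every element repeats a long namespace URI, they add up to the parse, transform, and wire costs listed above.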
Reproducible testing of web services requires mock services. Meet MockMotor.
How Do You Test Your OSB Code?
A composite, and often even a pass-through service in OSB, requires testing to prove it works. You may need to test multiple scenarios, including but not limited to:
- Account-specific logic
- Incorrect inputs
- Backend error conditions
- Partial failures and recovery
- Retries
- Performance under load
How do you do it?
Web Services Testing is a Common Problem
Every time we create web services (OSB or standalone, SOAP or REST, no difference), we have to test them.
Increasing the HTTP chunk size with -Dweblogic.Chunksize=65500. Some clients, surprisingly, have a performance issue parsing chunked responses. In one recent case, client code took almost 500 ms to parse the chunks of a 1 MB JSON response. OSB (WebLogic, in fact) can decide to send a larger payload in chunks, even though there is usually no benefit to it on internal networks. We cannot disable it, as it’s a required option for HTTP 1.
How to configure 2-Way SSL (aka Client Certificate) at a Biz in OSB. This post describes how to configure two-way SSL for outbound connections in Oracle OSB. In other words, we’re going to make OSB call a backend service that requires a client certificate to authenticate the connection. If you’re looking for information on how to make OSB accept client certificates, this is the wrong post; that would be inbound two-way SSL connections.
GenericParallel 1.5.0 now supports non-XML content, as well as HTTP POST, PUT, GET, DELETE and relative URIs. Download GenericParallel 1.5. OSB’s split-join is a WSDL-only facility: your message must be a SOAP one to be routed anywhere from a split-join. This is quite a limitation. With the rise of REST services, we do need to call multiple REST services in parallel. GenericParallel, too, used to accept only SOAP and XML messages.
This is how to collect all stats from an OSB 11g domain into a CSV file:
java -jar readosbstats.jar -c https://126.96.36.199:8002/sbconsole -u monitor -p password1
You can export OSB stats via JMX. This is not hard, and there are code samples all over the net. Unfortunately, JMX requires an administrator account. What are people who only have a monitor account (developers, architects, BAs) supposed to do? Here’s the answer: ReadOsbStats is a small free utility that doesn’t require admin credentials but collects the stats nonetheless.
You should realize that not all services are equally useful. Some, like submitting orders, directly generate revenue for your company. Others, like getting order history, while important, can be sacrificed to let orders get submitted. When the system is under a higher than usual load, how can we dynamically shut down non-essential services to release more resources for the essential ones?
Our Playground: the Orders Service
For our experiments, I’m going to use a mock service that has two operations:
Offload the currently unused in-memory XMLs to persistent storage, like BPEL does. Download the examples. Slow backends can kill the JVM when they are used in a composite service: the data accumulates in the service while the backend is taking its time to respond. Make the service slow enough and the data big enough, and the heap will be consumed entirely. Can we do something about it?
A 10,000-ft view of an OSB domain, in 5 minutes, with TransitMap. Is it possible to understand the inter-connectivity of a complex OSB domain? Is walking through the code step by step and taking notes, on paper and mentally, the only way? Can we generate a diagram of the domain? I have created a utility that visualizes an OSB domain, or a selected subset of its projects, in a single diagram, to help me figure out some of the most complex projects I have to work with.
How to store-and-forward messages in OSB and never lose them. In asynchronous mode, the caller disconnects before OSB delivers the message. If there is a temporary error, OSB has to save the message and retry the delivery later. How do we implement this correctly? What typical mistakes should we avoid?
Requirements for Reliable Messaging
Let’s formalize our needs. We have three parties: a message producer, OSB, and a message consumer.
How two standard services, JMS and SMTP, may fail when they get connected. Everyone knows JMS proxies. They are simple: just specify the URL and the proxy will read the messages from a queue. Everyone also knows SMTP Biz services. They, too, are very simple: just provide the email text and a few required SMTP headers, and the email is sent. There is barely any room to make a mistake, right?
The default SSL implementation is deprecated. Upgrading to JSSE, however, can make your oldest services fail. Update: eventually we installed nginx as an SSL/TLS proxy between OSB and the outdated backends. We could control all properties of the TLS connection from nginx downstream, including which SSL/TLS protocol to use, which certificate to present, and which ciphers are available. By removing the direct dependency this way, we were able to upgrade OSB and the backend systems separately, each on its own schedule.
Using Unit-of-Order with OSB. Download the test project. Unit-of-Order (UOO) is an Oracle (BEA) extension to standard JMS. It enforces the order of messages with the same key, so that the messages are consumed in the order they were added to the queue. This works in a cluster, too, by assigning each UOO key to one and only one managed server.
When Ordered Updates Are Needed
Suppose I update my phone number on a site.
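The key-pinning idea can be illustrated with a short Java sketch. Note this is a hypothetical hash-based assignment, not WebLogic’s actual routing algorithm; the point is only that the same UOO key always resolves to the same server, which is what preserves ordering in a cluster:

```java
import java.util.List;

public class UooRouting {

    // Hypothetical assignment: pin each UOO key to exactly one managed server.
    // Because the mapping is a pure function of the key, two messages with the
    // same key always land on the same server and are consumed in order.
    static String serverFor(String uooKey, List<String> servers) {
        int idx = Math.floorMod(uooKey.hashCode(), servers.size());
        return servers.get(idx);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("ms1", "ms2", "ms3");
        // Both updates for the same key go to the same server.
        System.out.println(serverFor("user-42", servers));
        System.out.println(serverFor("user-42", servers));
    }
}
```

Messages with different keys may land on different servers and be processed in parallel; only same-key messages are serialized.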
Dynamic routing to other projects’ pipelines and flows using thunks. OSB 12 is out. The dynamic routing functionality in OSB 12 has been extended with the ability to call pipelines and flows (a.k.a. split-joins) directly. A direct call to a pipeline is better for performance. A direct call to a flow is just more convenient than a call via a Biz with Flow transport: you do not need that Biz now, at all.
How to place dependency libraries into a JAR and hide them from the callout dialog. Sometimes we have to use OSB’s Java callouts. On many occasions the callout Java code requires the use of external libraries (dependency JARs). Here comes the problem: how to deploy these dependency JARs? Should we deploy the dependencies as JAR resources right into the OSB project? Or should we package them into the callout JAR itself?
How to collect OSB per-operation statistics for a JSON proxy. Download the full example. See other posts about OSB & JSON: Why JSON Does Help Direct Proxy Performance How To Build a JSON Pass-Through Proxy in OSB JSON Proxies: Inspecting & Modifying The Payload (Special thanks to Saeed Awan for the reference implementation.) In one of my previous posts, I demonstrated how to implement a simple pass-through JSON proxy.
To get fault details in split-join’s CatchAll, call an intermediary proxy. When split-join invokes an OSB business service and that call fails, CatchAll does not help. Instead of detailed information of what went wrong, the fault variable contains only a single element from the BPEL extensions namespace. Utterly useless. At the same time, if the error is re-raised with the Re-Raise Error block, the code calling the split-join gets the correct description of the fault.
This is a step-by-step guide on how to implement a pass-through JSON proxy in OSB. Download the full example. See other posts about OSB & JSON: Why JSON Does Help Direct Proxy Performance OSB and JSON Proxies: Gathering Statistics JSON Proxies: Inspecting & Modifying The Payload JSON is cool, but OSB doesn’t recognize it as a first-class data format. OSB cannot validate it, cannot transform it, and cannot even add the smallest security token to the JSON payload.
The failure ‘Application “com.bea.alsb.core.ConfigExport” could not be found in the registry’ can also be due to insufficient disk space on the build partition, and generally any resource shortage. I have already written about the “ConfigExport cannot be found” problem and how to solve it. Since then I have encountered the same issue again. Despite getting an identical error message, the root cause was totally different: a lack of disk space on the build partition.
XML validation requires a lot of CPU and memory. The connected systems are tested in DIT, SIT and UAT and they are unlikely to produce invalid XML. The backend systems perform validation anyway. Consider validating in test environments only, and running without schema validation in production. Use business validation instead of schema validation. An ideal ESB layer is invisible. It should not add any overhead to the round trip time.
A service’s domain-wide throttling value may differ from what you have configured for it. I have already mentioned that the throttling value assigned to a service on a specific managed server is calculated as the throttling value from the configuration divided by the number of managed servers, rounded up if not whole. There are curious corollaries to this statement.
Servers May Have Unequal Throttling (e.g. 6, 6, 4)
Huh?
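One plausible distribution rule consistent with the (6, 6, 4) example above — an assumption for illustration, not OSB’s documented algorithm — gives every server the rounded-up share and lets the last server take the remainder, so that the per-server values still sum to the domain total:

```java
import java.util.Arrays;

public class ThrottlingSplit {

    // Assumed rule: per-server value is ceil(domainLimit / servers),
    // except the last server, which absorbs whatever is left over.
    static int[] distribute(int domainLimit, int servers) {
        int perServer = (domainLimit + servers - 1) / servers; // integer ceil
        int[] result = new int[servers];
        Arrays.fill(result, perServer);
        result[servers - 1] = domainLimit - perServer * (servers - 1);
        return result;
    }

    public static void main(String[] args) {
        // A domain limit of 16 across 3 servers splits unevenly:
        System.out.println(Arrays.toString(distribute(16, 3))); // [6, 6, 4]
        // A limit that divides evenly stays equal:
        System.out.println(Arrays.toString(distribute(20, 4))); // [5, 5, 5, 5]
    }
}
```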
Sharing a Max Threads Constraint between multiple Work Managers causes them to share the same thread pool. It is natural to think of a Work Manager as a thread pool, and of a Max Threads Constraint as a property of that pool. Therefore it is natural to create, say, 3 Work Managers and assign each of them a Max Threads Constraint of 30 – the same named constraint (e.g. “MaxThreads30”). The problem is that this is wrong.
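The effect can be modelled with a shared semaphore: one pool of 30 permits, no matter how many Work Managers point at the same named constraint. This is a simplified sketch of the behavior, not WebLogic internals:

```java
import java.util.concurrent.Semaphore;

public class SharedConstraintDemo {
    public static void main(String[] args) {
        // The single named constraint "MaxThreads30", modelled as one pool.
        Semaphore sharedMaxThreads30 = new Semaphore(30);

        // "Work Manager A" takes all 30 slots...
        boolean aGotAll = sharedMaxThreads30.tryAcquire(30);

        // ...and "Work Manager B", although configured with "the same" 30,
        // cannot run a single request: the pool is shared, not 30 per manager.
        boolean bGotOne = sharedMaxThreads30.tryAcquire();

        System.out.println("A acquired 30: " + aGotAll); // true
        System.out.println("B acquired 1: " + bGotOne);  // false
    }
}
```

If you want 30 threads per Work Manager, each one needs its own separately named constraint.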
Throttling value, unlike the max threads constraint, is divided equally between all managed servers in the domain. Throttling allows you to limit the number of concurrent requests to a backend server. What is confusing is that the value is not per managed server, but for the whole domain. This makes sense if you think of this use case: you have a backend service that can only serve 20 connections at a time.
Most services carry all of their payload’s information in the soap:Body. Some, though, include meta-information in the soap:Header. Surprisingly, Split-Join simply drops the soap:Header values. The documentation says it should support them, but I failed to make it work. What do you do if you have to pass soap:Header values through a Split-Join?
How To Pass SOAP:Header In Plain Split-Join: A Hack
If you have to use the naked Split-Join facility, I pity you.
Use throttling in business services to protect the OSB from stuck threads. Throttling allows us to limit the number of outgoing requests currently in progress. At first, I thought it would only be useful to protect shaky downstream services, which crash if hit with too many concurrent requests. And throttling is useful for that. But not only for that. It is also useful to protect the OSB itself from too many stuck threads.
Use a marker group to toggle extended request validation in SIT/DIT but not in PROD. In DIT and SIT, it makes sense to validate every detail of the incoming request and the outbound response. Developers make mistakes populating the data, outdated WSDLs are used, all kinds of things go wrong. The earlier the defects are found, the less expensive they are to fix. Validation is a must.
OEPE builds take a long time and sometimes, for large domains, fail with OutOfMemory errors. It turns out this is not a heap OOM, but a PermGen space one.
[java] Exporting to /srvrs/esb_tp/build/output/paypal2/paypal2-sbconfig-140122-212239.jar...
[java] Exception in thread "Worker-4"
[java] Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Worker-4"
[java] Exception in thread "Worker-1"
[java] Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Worker-1"
The setup is complicated by the fact that the launcher started from build.
‘Application “com.bea.alsb.core.ConfigExport” could not be found in the registry’ is due to artifacts left in the build directories from a previous build. Clean them up and the error will be gone. Has your OSB build ever failed for no apparent reason with this puzzling message: Application “com.bea.alsb.core.ConfigExport” could not be found in the registry. (No? You’re lucky, you can enjoy life; the rest of you, please read on.) !ENTRY org.
A Business Service call may take much longer than the timeout specified for that service. OSB business services have connect and read timeouts to prevent indefinite waiting for the backend service. Just set it to, say, 15 seconds, and the call will never last longer than that! … Right? Nope. It can take forever. First, let’s understand what “read timeout” really means. The read timeout is not triggered when the request has not completed 15 seconds after a successful connect.
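The semantics can be demonstrated with a self-contained sketch (a toy socket server, not OSB itself): the server trickles one byte every 100 ms, so a 300 ms read timeout never fires even though the full response takes around a second, because the timeout bounds each individual read, not the whole call.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ReadTimeoutDemo {

    // Serves `count` bytes, one every `delayMs` ms, and reads them back
    // through a client socket configured with the given read timeout.
    static int readAll(int count, int delayMs, int soTimeoutMs) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread producer = new Thread(() -> {
                try (Socket s = server.accept();
                     OutputStream out = s.getOutputStream()) {
                    for (int i = 0; i < count; i++) {
                        out.write('x');
                        out.flush();
                        Thread.sleep(delayMs);
                    }
                } catch (Exception ignored) { }
            });
            producer.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                // Bounds EACH read() call, not the total duration of the response.
                client.setSoTimeout(soTimeoutMs);
                InputStream in = client.getInputStream();
                int total = 0;
                while (in.read() != -1) total++;
                return total;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // 10 bytes, 100 ms apart: ~1 s total, yet the 300 ms read timeout
        // never triggers, because every individual read completes in ~100 ms.
        System.out.println("bytes read: " + readAll(10, 100, 300));
    }
}
```

A backend that keeps dribbling bytes can therefore hold an OSB thread far beyond the configured read timeout.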
For direct OSB proxies, passing JSON content instead of XML will improve the overall end-to-end performance due to much smaller serialization and deserialization overhead. See other posts about OSB & JSON: How To Build a JSON Pass-Through Proxy in OSB OSB and JSON Proxies: Gathering Statistics JSON Proxies: Inspecting & Modifying The Payload In my previous post I attempted (and failed) to improve the performance of a service with a large response by gzipping its payload.
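A toy illustration of the size difference (made-up field names, not a real service contract): the XML form repeats every element name as an open and a close tag, so the equivalent JSON is simply a smaller string to serialize, ship, and parse.

```java
public class PayloadSize {

    // Hypothetical, equivalent payloads for the same record.
    static String xml() {
        return "<customer><id>42</id><name>Alice</name>"
             + "<active>true</active></customer>";
    }

    static String json() {
        return "{\"id\":42,\"name\":\"Alice\",\"active\":true}";
    }

    public static void main(String[] args) {
        System.out.println("xml:  " + xml().length() + " chars");
        System.out.println("json: " + json().length() + " chars");
    }
}
```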
For direct OSB proxies, passing XML content as GZip will not improve end-to-end performance, due to the extra time required for gzipping and un-gzipping. There is a service I have to deal with that is a constant headache. It returns a huge XML, 1 MB or more, with all kinds of information. The web application that calls the service experiences a very noticeable delay, which affects the user experience.
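For reference, the gzip mechanics themselves are simple with java.util.zip; the point of this post is that the CPU time both ends spend in these two calls can cancel out the wire-time savings on a fast internal network. A minimal round-trip sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    static byte[] gzip(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] data) throws Exception {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(data))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws Exception {
        // A repetitive XML-ish payload compresses very well over the wire...
        byte[] xml = "<row><a>1</a><b>2</b></row>".repeat(10_000)
                     .getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(xml);
        System.out.println("original:   " + xml.length + " bytes");
        System.out.println("compressed: " + packed.length + " bytes");
        // ...but both ends now pay CPU time to pack and unpack it.
        System.out.println("round trip intact: "
            + Arrays.equals(xml, gunzip(packed)));
    }
}
```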