eCube Systems Announces NXTera 7.2 for Red Hat Enterprise Linux 8 (RHEL 8)

NXTera 7.2 now makes it easy for developers to migrate their Unix RPC applications to Linux with a modern agile development and Web Services environment.

eCube Systems, a leading provider of middleware modernization, integration and management solutions, announced the release of NXTera™ 7.2 High Performance RPC Middleware, capable of running both client and server applications on the Red Hat Enterprise Linux platform. NXTera is the replacement middleware for Entera and continues to expand the platforms on which it can run. NXTera now supports a variety of Linux platforms, including SUSE, CentOS, Ubuntu and Red Hat, in addition to the existing Unix platforms and the Windows platforms from NT and 200X Server through XP, Vista, 7 and 10. With the latest RHEL 8 platform, NXTera middleware fully supports in-house applications on most platforms, with naming services through RPCbroker and the JDBC_START database access engine. With Web Services connectors for applications and enhancements to the broker to support the internet, NXTera applications can now run in the Cloud.

Click here to read more

eCube Systems and Lima-Thompson Consulting Group Announce a Joint Venture to Develop an Advanced Software Security Platform

eCube Systems and LTCG will partner together to develop a Dynamic Security offering later this year.

eCube Systems, LLC, a provider of the NXTware hybrid infrastructure platform and legacy modernization tools and services, announced that it will enter into a joint venture agreement with Lima-Thompson Consulting Group (LTCG) to develop and market advanced security applications to Fortune 500 companies. "eCube is pleased to announce this joint venture developing an enterprise security application," says Kevin Barnes, Managing Partner at eCube Systems. "With our expertise in Hybrid Infrastructure Platforms and LTCG’s in-depth expertise and knowledge of computer security issues and needs, we are confident that we can build a highly advanced, scalable and portable enterprise security solution."

Click here to read more

eCube Systems Announces NXTera 7.1 Cloud-Enabled Entera RPC Middleware Certified on Suse Linux Enterprise 12

NXTera 7.1 now makes it easy for Solaris, AIX, HP-UX and Windows developers to port their 3GL and 4GL applications to the Cloud on SUSE Linux with a modern agile development and Web Services environment.

eCube Systems, a leading provider of middleware modernization, integration and management solutions, announced the release of NXTera™ 7.1 High Performance RPC Middleware for SUSE Linux Enterprise 12. NXTera 7.1 is the official Borland-sanctioned replacement middleware for Entera and includes modern tools for DevOps, advanced naming services with NAT support, JDBC database access for Entera servers, an Eclipse workbench for COBOL, FORTRAN, C and C# language integration, and web service enhancements to its generation of C, C# and Java service interfaces and clients. With the addition of SUSE Linux Enterprise 12, NXTera now supports the broadest range of OS platforms in the industry and can make inter-platform migration or legacy modernization much easier. NXTera Workbench is now integrated into the NXTware platform for seamless, distributed agile development and DevOps functionality with NXTmonitor. New server stub generators for BASIC, COBOL, C, C#, FORTRAN and Java enable enterprise developers to create multi-tier, multi-language applications with both RPC and Web Services connectors. "As more platforms like Solaris, AIX and HP-UX disappear, developers turn to Linux to meet their needs for agile development, and NXTera continues to expand to different platforms," says Kevin Barnes, President and CEO of eCube Systems. "Extending the ROI of legacy applications to newer technologies and integration with new languages is key to helping drive IT innovation."

Click here to read more

VSI’s John Reagan Interview on GEM vs. LLVM for X86/64

The following blog documents a series of interviews with VMS Software Inc.’s John Reagan, who heads the compiler group. John was giving a series of presentations on leveraging LLVM for the new OpenVMS X86 port. After listening to his presentation in Malmo, Sweden, I decided to contact him and ask him to expound on his fascinating view of modern compiler technology and how it works. John has worked on various compilers at DEC, Compaq and HP and has a very deep background in many 3GL and 4GL languages and their tools.

Background

John spent 31 years working for Hewlett Packard as a compiler engineer for OpenVMS, specializing in Pascal, COBOL, Macro-32, the GEM backend, and assorted other compiler-related projects. He is currently working for VMS Software, Inc. as a compiler architect/developer on the compilers and compilation toolchain for OpenVMS on Itanium and future platforms.

Q: I have seen your presentation at the Bootcamp and at the Connect Conference in Malmo earlier this year. I had a number of questions on the presentation, so I will start there. In your presentation, you explained that you worked on the development of GEM, and that it was created as a common code generator for the Alpha 64-bit RISC architecture with a target-independent IR that is used by all the languages. You mentioned GEM and GEM IR – those terms were unfamiliar to me, so I had several questions: What is the difference between the two? Does IR stand for Intermediate Representation? Can you describe how this was done in GEM, how LLVM does it, and why this step is important for generating object code?

A: "Those are all good questions. To be clear, I wasn’t one of the original GEM developers. As a front-end developer, I used GEM, provided feedback to GEM, even worked with the GEM team to solve particular problems. I only became a real GEM developer when most of the original GEM developers transitioned from Compaq to Intel. You asked about GEM; GEM is not an acronym. Back in the Digital days, there was a Prism project, and Prism was going to be the follow-on architecture to VAX. It wasn’t done; Alpha was done instead, and it was almost Prism. At the time, everyone got enamored with things that looked like jewels, so there was GEM and Opal and other names that were involved. The name just came out of that Prism era. So all of our VAX compilers were this mix of modules: different code generators, different approaches, different optimizers – some had great optimizers, like the VAX Fortran compiler, and some had almost none: the VAX COBOL compiler had no optimizer at all. So, from everything we learned on VAX, going to Alpha they decided: let’s write a common back end that everyone uses. We had some experience on VAX; you may have heard of the VCG, the VAX Common Code Generator, which the VAX PL/I, VAX Ada and VAX C compilers shared. It wasn’t really common – it was cloned from one to another and shared resources – but we knew as a compiler group we were going to have to write compilers not only for VMS but also for MIPS Ultrix. That group came to us and said: we want your Fortran compiler; it is a great Fortran compiler. We said: how do we get our VAX Fortran compiler onto MIPS Ultrix? You switch architecture and operating systems. GEM was invented as a multi-OS, multi-target backend. It provides interfaces that deal with command line processing, with file reading, file writing, etc.
So a front end could say: I need to read my source file, and on VMS it knew how to call RMS, and on Tru64 or MIPS Ultrix it knew how to call those file systems. That part of GEM deals with environmental issues. The other part of GEM is this intermediate representation, where you take the source code and you turn it into a… think of it as a high-level assembler: I want to create some variables and I want to add them together or multiply them or do a test and a branch. If you dumped it out you would say this has an assembler feel to it, but it is a little more abstract. It knows about descriptors, it knows about loops, but it has almost no target architecture information in it at all. For some things like weird BLISS linkages, we had to have register numbers encoded in some linkages, but for 99% of the generated code it’s target independent. GEM took that intermediate representation and internally, depending on whether it was an Alpha or MIPS target version of GEM, turned it into a tree and did optimizations like code hoisting, common sub-expressions and loop unrolling – the stuff that you needed to do for Fortran – and we needed to make sure that all the tricks we played for VAX Fortran were also performed by GEM. GEM also had to learn new tricks for Alpha; it would be horribly disappointing if we came up with this really fast Alpha chip and then found out the Fortran compiler really sucked and gave you worse performance than your older VAX did. Having fast chips is one thing, but if you don’t have a good compiler, you are in trouble. So GEM does all that transformation, and it knows how to write out the object information, because on different platforms it knows the object file is a different format. I look at GEM sometimes as Mr.
Potato Head; there are different pieces inside of GEM: there’s the optimizer, which is mostly target independent, but there are several code generators – one for Alpha, one for MIPS, one for x86 32-bit for Visual Fortran. It knows how to write COFF files for Tru64 object files, it knows the Windows object format for Windows systems, it knows how to write the VMS object format, so when we build GEM for different targets, we just pick and choose the pieces we want and end up with a GEM that gets linked with the front end. The front end produces a symbol table and a generic sequence of the intermediate representation and tells GEM: here, go have fun and go generate code. What that enabled is that the Fortran front end could now be the same front end for VMS Alpha as for MIPS Ultrix, for Tru64 Alpha, for Linux on Itanium – for all those different targets and flavors that we had. It allowed, for the first time, COBOL to be optimized. COBOL now runs through the same optimizer that Fortran runs through. The Alpha COBOL could now do strength reduction and loop unrolling. Our Alpha BASIC compiler knows how to do pointer analysis. We had never done that before because it wasn’t worth the effort – no one was really looking for that type of performance in those languages – but you get it all for free, right? That’s the advantage of that backend. Going to something like LLVM, we have the same advantage again. We were presented with the issue of how we keep the existing VMS customers happy; they’re moving code forward, but I also want to modernize VMS and give you newer language standards for things like C or C++ and Fortran, and good quality code for all the different types of X86 chips, whether they come from AMD or Intel. LLVM has support for all the different types, not just one or two types of chips.
On Alpha, there were just two or three different flavors of Alpha chips, and the differences between them were pretty minor in most cases. Internally they were faster, but from an outside view they behaved pretty much the same. Go look at X86, and over the years there are dozens of subsets of different instructions on different chips. Some chips like one sequence of instructions versus another, so the optimizer almost needs to know which chip it is optimizing for. To try to keep track of that is a difficult job; it takes a lot of people. With something like LLVM, there are hundreds of people around the planet watching that stuff, constantly tweaking it and keeping track of it so I don’t have to, right? So we said: we have all these front ends that generate this GEM intermediate language; how do we get those same front ends hooked to LLVM? We need those front ends, because we have customers out there with millions of lines of BASIC, Pascal, COBOL and Fortran – all those things – and they expect our front ends; there’s no other place to go for those things. We could have made massive changes to each front end to have it directly generate LLVM intermediate language, but why bother? They all generate a nice GEM intermediate language – pretty understandable – so we have written a GEM intermediate language converter. It takes the GEM intermediate language and generates the LLVM intermediate language. Now, LLVM being a compiler tool, this is not really rocket science. At this level, for people in this industry, it is the same thing: the LLVM intermediate language lets you create variables, add two together and then talk about the result. There are 200 GEM intermediate representation node types: add, subtract, multiply, shift left, shift right, bit fetch, bit store, all the various flavors. 75% of them are one-to-one mappings onto something equivalent in LLVM. Add is add, and so on.
There is not really much else you need to do. There are a few places that are a little different, since GEM has some built-in knowledge about how to build VMS descriptors, which made it easy for every front end to build them. LLVM doesn’t know about VMS descriptors, so for the one GEM intermediate node that talks about building a VMS descriptor, the converter has to generate a sequence of fifteen or twenty LLVM intermediates – but it is pretty much: fill the datatype in the first byte, fill the class in the second byte, fill the length in the next word. It is relatively straightforward and very mechanical. All we had to do was go through these several hundred GEM nodes and convert them all: just brute force. Just pound your way through it. For the most part that gets you 90% of the way there.”

Q: So what about the language of GEM? Is it written in C?

A: “No, GEM is a mixture. It started life as all BLISS. Over the years, it morphed into different languages. There was a switchover when all new stuff was written in C, but there is still a lot written in BLISS. Of course, if you work on GEM, you have to learn how to do it in BLISS. The converter that we just wrote is in C++; there’s no reason to do it in BLISS. But of course all of these front ends – the BASIC front end, the COBOL front end, the Pascal front end – are all in BLISS. The Fortran front end is in C. Of course the C front end is in C, and the BLISS front end is in BLISS. So, being in our group, you still need to know how to read and manipulate BLISS.”

Q: So LLVM – is it written in C?

A: “Yes – well, 99% in C++, and it uses extensive C++ features. There are a few lower-level utility routines that are straightforward C99. It is very object oriented in its interface for building the LLVM IR. You do have to go and learn to read classes and inheritance and all those things. A good working knowledge of C++ helps when reading the LLVM sources. Whether it is things like C++, Swift or Rust or other things in the industry on Linux platforms, the LLVM IR has already been shown to work for a lot of different languages. I downloaded the LLVM source code and built it on OpenVMS Itanium to generate code for X86. That is what we are using for our cross compilers. There are hundreds of files that are part of LLVM that we compile. We had to make a few changes for VMS; I think right now it is under 500 lines across the entire code base. We’ve had a couple of extensions; for example, in the Unix world, routines don’t carry an argument count, while in the VMS world, every routine expects an argument count passed in. So we’ve added some support to make sure every routine call has an argument count. That had to be done inside the code generator, not in the converter – the converter is too high level for that. There are some things we had to add for debug information; there is debug information for some of our languages like BASIC or Pascal or COBOL or BLISS that still isn’t in the current DWARF standard. So we have added some extensions to make sure those additional types get produced that our Debugger expects. It is pretty much very mechanical and high level. We haven’t made any fundamental changes to LLVM, because we want to pick up new versions of LLVM going forward. I don’t want to make the mistake of snapshotting LLVM today and sticking with it for the next ten years. It will be out of date in six months.
LLVM churns out new versions quite often, adding new optimizations, support for new chipsets, and code mitigations for security issues like Spectre and Meltdown. So staying current with that code generator technology is very important.”

Editor’s note: This seems to be a good place to stop the first blog article. Coming up in the next blog of our interview: history of LLVM and futures.

Click here for part 2.

How can you extend the value of your company’s abandonware?

Dependence on applications built on technology that has been abandoned by its vendor is a serious problem that needs to be addressed. As we discussed in last year’s blog, many companies are already struggling just to keep the developer skills necessary to keep these kinds of applications going. In that blog posting, we used a picture to describe the progression of risk and return on investment (which may not accurately convey the risk or problems companies face). In some cases, the problem may be more acute, since some of these legacy applications may be built on obsolete third-party technology which has been abandoned by the vendor for one reason or another. This is what is known as “abandonware”. The vendor’s first step in this process is usually the publication of a support matrix indicating the number of years the vendor’s software will be supported. In this way, the vendor can announce the successor product and the “extended support” fees that kick in when the software reaches end of life. Many companies see this notification as a decision-making tipping point for keeping or unloading the application that uses the abandonware. Assuming a readily available replacement is not present, does the company continue with the existing application and increase its budget for support, or does it drop vendor support entirely and support the application internally? This depends on many factors, including the importance of the application, the reliability of the third-party software, current hardware and software upgrade issues, and compatibility with other parts of the enterprise. Usually the vendor gives its clients a few years’ notice, but in some cases the end can be quite abrupt, as with a bankruptcy or acquisition. In either case, you can assume the vendor is preparing to unload or re-train its technical and support personnel rather quickly in anticipation of the end-of-life date.
This will likely mean problem resolution and bug fixes for the vendor software will be delayed or curtailed. If adoption of a new system is cost prohibitive, there are options to continue in the short term with the existing system. While freezing application development is an option, continuity for the application can be maintained with a balanced approach of services and careful maintenance. Depending on the application, abandonware can be sustained in the long term if you follow some helpful guidelines for legacy application support:

  • Extended support – if extended support for the abandonware is available at a reasonable cost, this is the best and lowest-risk option. One caveat: you must make sure the vendor still retains enough expertise to support the product and fix any potential problems; otherwise the support is not worth the additional cost.
  • Alternative support – if the vendor offers no extended support or is not able to provide adequate support, third-party support companies like eCube Systems, with expertise in these legacy abandonware products, can provide support until the replacement system is ready.
  • Enterprise Evolution/Legacy Modernization – employing an intelligent, phased approach to analyzing, replacing and modernizing the application component by component is a viable alternative that removes the dependency on the abandoned software, and it should begin as soon as possible.

By extending support on the existing system, you eliminate the immediate impact of change to your application; and by implementing a phased plan to modernize the application, you retain the continuity of the application while eliminating the dependency on abandonware over time. This extends the return on investment while minimizing risk, so it won’t adversely affect your users.


eCube Systems Announces new release of NXTware Remote 4.6.5 for OpenVMS

Latest NXTware Remote includes NXTware Remote Builder integration with the Jenkins plugin, DevOps remote deploy, and audit functions for development events on OpenVMS

eCube Systems, a leading provider of middleware modernization, integration and management solutions, announced the latest release of NXTware Remote™ 4.6.5 Agile Development Environment for OpenVMS. The new release gives agile developers the ability to deploy to multiple servers and adds an audit feature for compliance purposes. The NXTware Remote Deploy functions allow developers to compile and test on their development server and deploy their executables for QA or production in a modern DevOps manner.

In addition to SCA, the new version of NXTware Remote adds source control library features like the CMS ANNOTATE function, as well as a view for the CMS SHOW HISTORY command. More updates planned for this year include the addition of an audit service and the creation of a new NXTware Remote Test Server to enable automated deployment across the enterprise.

"We are seeing more requirements for NXTware Remote for agile development on OpenVMS and implementing these features into new releases. Remote Deploy is the latest enhancement to provide DevOps capability for OpenVMS developers. Being able to integrate with Jenkins to run automated builds from within Eclipse is another important step in evolving the legacy development environment with modern tools," says Robert Doyle, Chief Architect of eCube Systems.

The features described in this release are available in pre-release to select customers and under consulting engagements. For more information contact eCube Sales at: 866.493.4224 Ext 1.

eCube Systems Announces NXTera 6.5 RPC Middleware Tools in NXTware Remote for OpenVMS and Linux

eCube Systems, a leading provider of middleware modernization, integration and management solutions, announced the release of NXTera™ 6.5 High Performance RPC Middleware. NXTera 6.5 includes an all-Java broker with NAT support, JDBC database access for Entera servers, Eclipse tools for COBOL, FORTRAN, BASIC and C# language integration, and enhancements to its generation of C# and Java service interfaces and clients.

In addition to Linux, NXTera now supports OpenVMS and can make migration or legacy modernization on that platform much easier. NXTera Workbench is now integrated into NXTware Remote for seamless agile development functionality. New server stub generators for BASIC, COBOL, C, FORTRAN and Java enable OpenVMS developers to create multi-tier, multi-language applications with both RPC and Web Services connectors.

"As more developers need agile development tools on multiple platforms, NXTera continues to expand to different platforms," says Kevin Barnes, President and CEO of eCube Systems. "Extending inter-operation between newer technologies and legacy languages is the key to helping drive IT innovation."

Visit the Press Release for more information

eCube Systems Extends NXTware Remote Development Platform to Linux

eCube Systems, a leader in legacy systems evolution, today announced the addition of the Linux platform to its NXTware product line with NXTware Remote Server and NXTware Remote Studio. The products, marketed jointly as NXTware Remote, work together to make it easier for teams of developers to develop code on local workstations running Eclipse – to compile, debug and deploy on a remote OpenVMS or Linux server. Previously, NXTware Remote development wizards and tools were only available for OpenVMS, where developers could take advantage of a distributed development environment and offload development work to a workstation. Now with NXTware Remote for Linux, the advanced development functions available for OpenVMS can also be used for Linux, so that developers on both platforms can share the same interface, saving time, reducing costs and speeding development. Customers continue to look for simpler and faster ways to develop and deploy applications. The new NXTware Remote suite of tools for Linux servers helps customers who have standardized on Eclipse to gain the benefits of distributed development on Windows, Linux or Mac.

Visit the Press Release for more information

VMS Software, Inc. Bundles Agile Development Environment with Latest Release of OpenVMS Operating System

VMS Software, Inc. (VSI) announced the worldwide availability of eCube Systems’ (eCube) NXTware Remote Server and Client software as part of the software media kit for VSI’s newly released OpenVMS Version 8.4-2 (Maynard Release) operating system for HPE Integrity servers. This initiative continues VSI’s commitment to modernizing the OpenVMS platform by availing the OpenVMS development community of leading-edge OpenVMS 3GL and Java remote development tools, based on the Eclipse IDE client, with distributed services on OpenVMS...

Visit the Press Release for more information

NXTware Remote 4.5 for OpenVMS and Linux

NXTware Remote is a multi-platform distributed development environment using Eclipse that enables programmers to develop in multiple 3GL languages on a variety of legacy platforms. This release includes:

  • New Pascal editor for Eclipse
  • CMS class support

Visit the Press Release for more information

eCube Systems Announces New DevOps Solution NXTmonitor

The new name, NXTmonitor, conveys the visual nature of eCube’s Application Performance Management system, the successor product to NXTminder.

eCube Systems, a leading provider of middleware modernization, integration, and management solutions, announced the release of NXTmonitor, a full-featured application orchestration solution. NXTmonitor, which inherited the code base of NXTminder, has been extended to support multi-discipline processes and will act as a DevOps utility in a heterogeneous enterprise environment. Previously, NXTminder was packaged with NXTera middleware to configure and manage Entera and NXTera RPC servers.

“Since we are widening the focus of this solution to DevOps, we felt the need to change the name to NXTmonitor to accurately reflect the operations monitoring features it provides,” says Kevin Barnes, President of eCube Systems.

NXTmonitor will provide immediate benefits to operations as a distributed application configuration, deployment, testing, and monitoring tool. It will aid in the migration and deployment of an application throughout its entire life cycle as it progresses from development to testing to production environments.

As a DevOps utility, NXTmonitor will:

  • Detect and fix problems (when possible)
  • Notify operations of current state
  • Provide audit logs for problem resolution
  • Manage application dependencies
  • Monitor application health
  • Perform intelligent restarting and capacity planning

NXTmonitor is a platform-independent process control and application management tool designed to simplify the runtime operation and dependability of web, cloud, and enterprise applications built on distributed processes, services, and scripts.

About eCube Systems

eCube Systems offers a family of middleware evolution products and services that maximize return on technology investment by leveraging existing technical equity to meet evolving business needs. Fortune 1000 companies and government agencies turn to eCube Systems to reduce risk, extend ROI, and increase productivity as they consolidate existing capabilities and evolve legacy systems to contemporary SOA platforms.

eCube Systems, LLC, is headquartered in Montgomery, Texas with marketing offices in Boston, MA and R&D in Montreal, Canada. For more information, visit us at http://www.ecubesystems.com/Contact.html

The Future of OpenVMS: An Analysis of eCube's 2015 OpenVMS Community Survey White Paper Now Available

Over the last year, there have been many changes in the OpenVMS space. Of course, no change was bigger than HP licensing OpenVMS to VMS Software, Inc. (VSI). In 2014, HP released a road map that indicated a limited future for OpenVMS beyond 2020. Until the VSI announcement, many companies and OpenVMS Community members were openly concerned about the long-term viability of OpenVMS and the strategic assets that depended on it. Companies liked or had been satisfied with OpenVMS, but they were uncomfortable with, or unable to accept, the prospect of using an unsupported operating system. As a result, many organizations began to seriously question their commitment to OpenVMS, consider a future without it and, in some cases, begin migrations.

The VSI announcement eliminated a lot of uncertainty in the OpenVMS Community and provided developers with a new OpenVMS road map that extends beyond 2020. We were curious how the news impacted the OpenVMS Community so we conducted a survey.

After going through 73 responses, we wrote a white paper that looks at:

  • The Impact of VSI
  • Factors Driving OpenVMS Decisions
  • OpenVMS in the Future

Click here to download the white paper.