Tuesday, August 17, 2010

Confronting International Regulatory Compliance: Web-based GTM Solution

TradeBeam Background

Since its founding in 2000, TradeBeam has grown rapidly to become a major force in the GTM marketplace. Through a series of strategic acquisitions and the integration work of its management and product development teams, the company has embarked on a mission to bring a hosted, eventually end-to-end, GTM solution to market. This solution aims to link the physical and financial supply chains, enabling companies to manage and execute global trade activities from within a single software platform. More than 3,000 enterprises worldwide currently leverage TradeBeam's GTM solution, including such industry giants as Neiman Marcus, Liz Claiborne, General Motors Holden, Delphi Automotive Solutions, and Stryker Instruments. In an effort to expand its product footprint, TradeBeam announced the acquisition of Open Harbor, a leading provider of international trade logistics (ITL) solutions. The terms of the 2004 deal were not disclosed.

Part Two of the TradeBeam Keeps on Rounding Out Its GTM Set series.

With a forecast for positive cash flow in 2005 and no current debt, TradeBeam has the funds to expand sales, marketing, and international operations to further establish its leadership within GTM. In addition to expanding its sales and channel development, the vendor plans to extend product functionality to areas such as cargo insurance, foreign exchange, customs auditing, and transfer pricing. It also may use its capital resources to pursue additional acquisitions that support its strategy for long-term growth and leadership.

Currently, TradeBeam targets customers in cash-sensitive industries. Such organizations can quickly realize the value of automating the entire global transaction, from order through to payment. TradeBeam also helps organizations move beyond physical optimization toward improving operations: improved visibility, security and regulatory compliance, contract compliance, vendor management, speed-to-market, quality of service, and risk mitigation are some of the areas it targets. TradeBeam further helps companies achieve financial optimization in terms of accounts receivable reduction, lower inventory levels, and interest cost savings, and promises reduced penalties, write-offs, and overhead.

At the end of 2004, TradeBeam announced that it had successfully completed two deployments of its software in support of the US Department of Homeland Security's (DHS) Operation Safe Commerce (OSC) initiative. DHS is one of the largest international shippers, and its OSC initiative is a collaborative effort between the federal government, the business sector, and the maritime industry to develop and share best practices for the safe and expeditious movement of cargo. Its goal is to protect the global supply chain while facilitating the flow of commerce. TradeBeam's solution was implemented in fewer than fifty days across two global trade lanes, and is now providing real-time tracking, monitoring, exception management, and reporting on dozens of physical and financial supply chain events and exception conditions. DHS uses TradeBeam's GTM platform to manage security from the foreign factory to port for the ocean and land transportation of cargo shipping containers. It uses global trade event tracking for order, logistics, and payment management, and the shipping system integrates radio frequency identification (RFID), global positioning system (GPS) fencing, and chemical and biological sensors.

TradeBeam was reportedly selected to be a key participant in DHS' OSC trade lane trials because of its ability to monitor, evaluate, and manage the physical and financial supply chains for inbound international shipments. Its software also detects and responds to potential security issues across an enterprise's global operations. TradeBeam's OSC solution ensures shipment visibility and compliance for import processes that touch multiple systems and supply chain partners. Time-consuming and error-prone manual processes are replaced by an automated collaborative solution that provides supply chain event management (SCEM), order management, party screening, risk management, trade documentation management, reconciliation, and RFID tracking.

This is Part Two of a five-part note.

Part One discussed TradeBeam and GTM.

Part Three will discuss tackling the supply chain.

Part Four will detail TradeBeam's GTM solution blueprints.

Part Five will cover competition, challenges, and make user recommendations.

TradeBeam Defines GTM

A major funding announcement (see part one) to expand its functional footprint, the deployment of its solution for DHS, and the Open Harbor acquisition further validate TradeBeam's definition of GTM. TradeBeam's aspiration of "managing the entire life cycle of a trade across domestic and international order, logistics, and settlement activities to improve operating efficiencies and working capital" appears to be coming to fruition. Considering the impact and applicability of global trade information across various functions, such as sourcing, network design, logistics, and product development, companies should view their entire enterprise platform as a GTM solution. In other words, in order to gain maximum value, companies must integrate GTM functionality across multiple business processes and applications.

TradeBeam also defines global trade as encompassing the life cycle of a global buy-sell transaction, comprising participants (sellers, buyers, freight forwarders, banks, etc.); tasks (checking compliance, booking transportation, clearing customs, applying for letters of credit, etc.); and documents (sales order, invoice, packing list, letter of credit, bill of lading, etc.). These global trade processes require the concurrent management of the flow of goods, funds, and information.

Therefore, some enterprise applications, such as ITL and GTM, simply lend themselves well to the hosted model. Given their far-reaching nature, they can hardly be delivered efficiently any other way. Namely, global import/export "procure-to-pay" or "order-to-cash" processes entail a number of activities, such as sourcing suppliers and customers; processing purchase and sales orders; insuring goods; and issuing and receiving letters of credit (LC). They also involve financing trade; arranging shipping; creating trade documents; ensuring customs compliance for export and import; sending and receiving goods; sending and receiving invoices; reconciliation; and initiating and receiving payment (see figure 1).

On a more granular level, these activities belong to the following sub-processes:

* Order. Plans demand needs, manages bills of materials (BOM), manages product catalogs, checks inventory status, creates purchase orders, checks compliance, manages inventory, manages purchase orders, assesses supply chain management (SCM) risk, acknowledges orders, classifies goods, calculates landed costs, manages contracts, insures goods, and obtains credit insurance

* Finance. Applies and manages LC, manages documents collection, manages open account, requests financing pre- and post-shipment, checks compliance, assesses SCM risk, and arranges foreign exchange

* Ship. Requests booking, books shipment, creates ship notification, creates shipping documents, manages shipping notification, manages shipping guarantee, tracks shipments, manages events, assesses SCM risk, manages customs, clears customs, receives goods, and manages returns

* Settle. Creates and presents invoices, reconciles documents, manages disputes, prepares and presents documents, manages insurance claims, and receives remittance

Given the detail involved in each of these processes, plus the fact that they stretch over many jurisdictions, many of them can only be efficiently fulfilled through a Web-based hosted solution priced per transaction. To optimally complete the global trade cycle, a business must automate, track, and provide visibility to the entire GTM process in order to optimize its supply and distribution chains. Because Web-based services are steadily growing, TradeBeam's model seems well suited to the task.

The average global trade cycle from order through settlement is 120 days, whereas a comprehensive hosted GTM solution like the one from TradeBeam can reduce this cycle by an average of 12 days, improving users' cash flow by 10 percent or so. TradeBeam's ability to do this has been made possible by its acquisition of over twenty GTM/ITL-related application components during the last few years, all of which have been rewritten to function in concert. Its recently acquired ITL specialist Open Harbor is also technologically compatible with TradeBeam. Like its new parent, Open Harbor is an application service provider (ASP) built for on-demand Web services and developed on an n-tier architecture. The architecture includes application servers, a storage area network (SAN), database servers, reporting servers, and database clusters within the firewall, and file transfer protocol (FTP) gateway servers, domain name system (DNS) and simple mail transfer protocol (SMTP) servers, Web servers, load balancers, and routers outside the firewall.
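The cash-flow arithmetic above can be illustrated with a quick calculation using the figures cited in the text (the annual trade volume below is a hypothetical example, not a TradeBeam figure):

```python
# Rough sketch of the working-capital impact of shortening the trade cycle.
# Figures from the text: a 120-day order-to-settlement cycle cut by 12 days.
# The $100M annual trade volume is a hypothetical assumption for illustration.

def working_capital_freed(annual_trade_volume, cycle_days, days_saved):
    """Cash tied up in the cycle is roughly volume * (cycle / 365);
    shortening the cycle releases the proportional share of that cash."""
    tied_up = annual_trade_volume * cycle_days / 365
    freed = annual_trade_volume * days_saved / 365
    return tied_up, freed

tied, freed = working_capital_freed(100_000_000, 120, 12)
print(f"Cash tied up in the cycle: ${tied:,.0f}")
print(f"Cash freed by a 12-day cut: ${freed:,.0f}")
```

Note that the 12-day saving is 10 percent of the 120-day cycle, which is where the rough 10 percent cash-flow improvement comes from.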

TradeBeam's solution was built on a platform that leverages commonly accepted industry standards such as Java 2 Enterprise Edition (J2EE), extensible markup language (XML), and Java Message Service (JMS). Additionally, by using collaborative workflow, a business rules engine, a security architecture, and third-party integration via XML or electronic data interchange (EDI) transformation, the solution extracts relevant information from diverse enterprise resource planning (ERP), SCM, customer relationship management (CRM), supplier relationship management (SRM), and legacy systems. It provides a comprehensive suite of on-demand application services for order fulfillment across a multi-tier supply chain consisting of buyers, suppliers, distributors, forwarders, brokers, governments, carriers, banks, and insurance institutions. The system supports both online transaction processing (OLTP) and online analytical processing (OLAP) modes, and runs on BEA Systems' application server, Oracle Database, MicroStrategy's reporting and analytical solutions, and webMethods' integration servers. It provides stable performance for over 10,000 users, with 99.87 percent availability and 24x7 operations.


Figure 1. Global order-to-cash and procure-to-pay cycle.

The Open Harbor acquisition also seems compatible because both vendors have a recurring revenue model of transaction-based pricing. Such a model is often beneficial to users, since buyers not only want to pay less for import/export and GTM software, but also want to spread their payments out.

TradeBeam and the Internet

The number of users wanting solutions delivered over the Internet with monthly subscriptions or transaction-based fees has noticeably increased. Most new customers want a transaction-based model rather than a straight purchase with a big payment up front (see Trends in Delivery and Pricing Models for Enterprise Applications). Moreover, an enterprise-wide, on-premise approach to global trade and logistics might not be the best approach because of high costs and implementation difficulties. In fact, the products with the broadest appeal for global trade today might be hosted, Web-based solutions, which companies can tap into outside their firewall to deliver supply-chain visibility and event management, multimode logistics execution, import and export management, and trade security to enterprise shippers.

Such a Web-based tool is not just the obvious choice for connecting to far-flung carriers, forwarders, and other service providers, but is often a better approach than ERP-oriented solutions for trade compliance and documentation. Namely, ERP systems usually have only product marketing descriptions in their item master data, not the technical descriptions needed for regulatory compliance. So, for example, if Apple Computer is importing PowerBooks, the name and associated marketing description of the product would not be adequate for US Customs. Trade compliance applications take the marketing description from the purchase order and associate it with a commercially acceptable description and the correct HTS classification. The PowerBook would become listed as a laptop computer with certain features and specifications and the right HTS code number. All of this happens over the Web. In the case of TradeBeam, the system then complies with the twenty-four-hour rule: based on the importer's purchase order and information about the customer's products, it creates shipping instructions for the forwarder and sends them to the carriers for their manifests.
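A compliance lookup of the kind described, mapping a purchase order's marketing description to a commercial description and HTS classification, might be sketched as follows (the table entries, including the HTS code, are illustrative only, not authoritative tariff data):

```python
# Minimal sketch of a trade compliance lookup: the marketing description on
# a purchase order line is mapped to a commercially acceptable description
# and a Harmonized Tariff Schedule (HTS) classification. The table below is
# illustrative, not authoritative tariff data.

COMPLIANCE_TABLE = {
    "PowerBook G4": {
        "commercial_description": "Portable digital automatic data processing "
                                  "machine (laptop computer)",
        "hts_code": "8471.30",  # illustrative classification only
    },
}

def classify(po_line_description):
    """Return a customs-ready description and HTS code for a PO line item."""
    entry = COMPLIANCE_TABLE.get(po_line_description)
    if entry is None:
        raise LookupError(f"No classification on file for {po_line_description!r}")
    return entry

item = classify("PowerBook G4")
print(item["hts_code"], "-", item["commercial_description"])
```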

This Web-based system, architected to connect trading partners around the world, should be faster, easier, and better than taking an enterprise-based system and trying to turn it into a global logistics system, since such systems are notoriously difficult to integrate with a large network of users. Also, hardly any company would want its ERP master data going directly to its vendors. It is far more secure to have a system that takes only the absolutely necessary data from an ERP or back-office system and shares just what is needed with the supplier.




SOURCE:
http://www.technologyevaluation.com/research/articles/confronting-international-regulatory-compliance-web-based-gtm-solution-17986/

RFID Case Study: Gillette and Provia Part Two: Challenges and Lessons Learned

Challenges

Radio frequency identification (RFID) is constantly on everyone's lips, and every relevant enterprise application vendor is hedging its bets toward becoming RFID-ready (see RFID—A New Technology Set to Explode?). Provia Software (www.provia.com), a privately held provider of supply chain execution (SCE) software solutions, can also tout the results of its RFID endeavors, as it has already put in much effort in terms of proof of concept in the field. Provia and other vendors are responding to the demand for RFID-compliant solutions from Wal-Mart, Target, Albertsons, the US Department of Defense (DoD), and others.

In September 2003, Provia was likely the first SCE vendor to offer full RFID support for a warehouse management system (WMS) in a standard product, one already compliant with the most recent electronic product code (EPC) specifications at the time. Provia has also worked with a high-profile client, Gillette, to test RFID support as part of the client's plan to track selected RFID-tagged items through the supply chain.

As one would imagine, the Gillette project was a demanding learning experience that tested the ability and determination of multiple technology providers to develop a scalable system based on the successes and failures of the pilot. In addition to Provia, whose WMS and transportation management system (TMS) applications had to be upgraded along the way to take advantage of EPC data, the major vendors involved were Sun Microsystems, Alien Technology, and Tyco-Sensormatic, which supplied different pieces of the required hardware, and OAT Systems, which provided the Senseware middleware needed to filter the RFID data coming from the readers. Together they developed expertise that will benefit not only Gillette, but also many other companies implementing RFID in the future.

One of the major challenges during the project was dealing with bulk reads and misreads (false positives and false negatives), given that a pallet passing through RFID reader-outfitted gates will create a number of scans. To that end, the Senseware software was used to filter redundant reads, such as when a pallet drives past the reader, backs up for whatever reason, and drives past again, which would cause the reader to register the tags on the cases three times.

Another case would be if a pallet has already been received into inventory but is scanned again in that location for whatever reason; the software has to recognize that the inventory does not have to be added or adjusted again unnecessarily.

Also, in the case where an item is read in the wrong location, the software should send an alert via an RF terminal to an operator, who should then investigate.
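The redundant-read filtering described above can be sketched roughly as follows; this is a simplified stand-in for middleware like Senseware, not its actual logic, and the field names are assumptions:

```python
# Simplified sketch of RFID read filtering: repeated reads of the same tag
# at the same reader within a short time window are collapsed into a single
# event, so a pallet backing up and re-passing a gate is not triple-counted.
# This is an illustration, not the actual Senseware implementation.

DEDUP_WINDOW_SECONDS = 60

def filter_reads(raw_reads):
    """raw_reads: list of (timestamp_seconds, reader_id, tag_id) tuples,
    assumed sorted by timestamp. Returns the de-duplicated events."""
    last_seen = {}   # (reader_id, tag_id) -> timestamp of last read
    events = []
    for ts, reader, tag in raw_reads:
        key = (reader, tag)
        if key in last_seen and ts - last_seen[key] < DEDUP_WINDOW_SECONDS:
            last_seen[key] = ts          # refresh window, drop duplicate
            continue
        last_seen[key] = ts
        events.append((ts, reader, tag))
    return events

reads = [(0, "gate1", "tagA"), (5, "gate1", "tagA"),    # pallet re-passes gate
         (10, "gate1", "tagB"), (300, "gate1", "tagA")]  # later, a new event
print(filter_reads(reads))
```

A real deployment would also need the inventory-reconciliation and wrong-location checks described above, layered on top of this basic deduplication.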

The above examples show only a part of the verifications and additional business logic that an RFID deployment may demand. While an additional software layer might help with verifying and reconciling inconsistent, overflowing, or missing data, it is apparent that RFID imposes considerably more complex data manipulation.

This is Part Two of a two-part Case Study.

Part One provided Background and Set the Goals.

Auxiliary Challenges

Having also experienced occasional auxiliary challenges, such as the lack of printers capable of delivering RFID, barcode, and human-readable labels, Provia formally partnered in April 2004 with Printronix Inc., a leading manufacturer of integrated supply chain printing solutions. As a certified systems integrator and value-added reseller (VAR), Provia will include Printronix's RFID solutions as part of its overall RFID offering and work with Printronix to support end users with RFID project planning and deployment strategies. To become a certified RFID partner, Provia had to demonstrate existing RFID expertise, as well as the ability to integrate and install RFID hardware, an area in which the company has tremendous experience, owing to its German-based parent company, Viastore Systems, a developer of material handling and automated storage and retrieval systems (AS/RS) for warehouses.

As a certified partner, Provia is now authorized to resell and support Printronix' Smart Label Developer's Kit, which helps companies create RFID technology applications within their own environments, and Smart Label Pilot Printer, which helps companies migrate from a development environment to RFID pilot activities. Printronix was reportedly the first manufacturer with an ultra high frequency (UHF), Class One smart label solution available commercially to help Wal-Mart, the DoD, their suppliers, and other retailers conform to RFID specifications.

Together, Provia and its Viastore parent represent a global SCE, material handling automation, and RFID provider with over $100 million (USD) in revenues, over $45 million (USD) in software revenue, 400 employees, over 1,000 customers, and more than 2,000 installations worldwide.

Provia and Viastore believe that the ability to offer a complete RFID compliance solution, with the software, hardware, and automation equipment needed to minimize investment while maximizing results, is what companies needing RFID compliance truly desire. Being able to get it all from a single company makes it even more attractive, which Provia touts as especially appealing to its third party logistics (3PL) customers, as they can offer it as a value-added solution to their clients. As partly shown by Gillette's project, this RFID solution can be incorporated at numerous points in the supply chain, from manufacturing to receiving, picking, or shipping. Initially, many companies will naturally look at addressing RFID at outbound shipping in a "slap-and-ship" manner, but to achieve full benefits beyond compliance, companies will have to utilize RFID further back (upstream) in their supply chains. Provia believes its solution is accordingly designed for incremental adoption throughout a supply chain.

However, RFID has not been the only focus of Provia's recent partnership and product enhancement endeavors. Provia might also stand apart from its peers in the enterprise applications industry by claiming that behind every one of its installations is a satisfied client. The company touts as its number one asset that it keeps its commitments and delivers on time and within budget; thus, 98 percent of its clients renew their 24x7 support contracts with the vendor every year. Aiding Provia on the implementation front are several integration partners, including general consulting houses like the former PricewaterhouseCoopers consulting arm (now IBM Global Services) and Deloitte Consulting, and smaller system integration services firms like the former Digiterra (now ciber), St. Onge, and Q4 Logistics.

Lessons to Be Learned

However, RFID compliance will mainly mean additional costs unless holistic supply chain business processes are also modified along the way. Many enterprises can learn much from Gillette's ViaWare WMS RFID deployment experience. As in some other success stories (see ROI for RFID: A Case Study), the benefits are achievable, but one has to beware of a still unproven technology, one that seems to be heading for the mainstream and boardroom priorities almost directly from the scientific labs, with a number of caveats owing to the technology's current level of imperfection (see Leveraging Technology to Maintain a Competitive Edge During Tough Economic Times—A Panel Discussion Analyzed; Part Four: RFID Software Issues). The giant retailers' compliance mandate has unfortunately preceded the achievements of applied physics and computer science. Thus, as noted earlier, the trickiest part of using RFID at the case and pallet level is positioning the readers and accompanying RFID gear correctly for accurate reads, which means lots of testing, manual intervention, and tweaking on the floor before reliable automation is reached.

Another big, related issue encountered so far by most early adopters is getting an accurate scan on a mixed pallet. Although RFID tags can in theory streamline complex stock-handling processes, enterprises should not assume that this will reduce the need for staff and processes in exception handling. On the contrary, in many cases the resource overhead of an RFID implementation can be even greater than that of traditional methods. Namely, there are no built-in default reconciliation mechanisms to validate whether the data was read or not, which imposes visual checking of goods as a means of reconciliation, which in turn might remove much of RFID's touted value proposition. Thus, enterprises should conduct a number of tests on the plant floor, since laboratory testing is often insufficient to determine RFID tag performance under real-life warehouse environmental and system conditions.

Users should also look warily at many vendors' claims of RFID readiness that cite how their applications were designed for automated data collection, since they have been doing it for years with RF technology, and RFID is supposedly just another format. Namely, gathering bar code data follows a very structured and straightforward practice, requiring a stock keeping unit (SKU), case, or pallet to be scanned individually, whereas in an RFID environment, data collection is not such a discrete process. A bundle of data is collected in one scan, regardless of the variety or quantity of product, and in its raw form the data shows no relationship between pallet, case, and SKU, a relationship necessary for inventory integrity. Therefore, middleware similar to, but more complex than, that developed for RF and automated material handling equipment is required to transform an unstructured mass of data into input the system can understand and process.
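The middleware transformation described, turning a flat bulk read into the pallet/case/SKU structure the system needs, can be sketched as follows (the tag identifiers and lookup tables are invented for illustration):

```python
# Sketch: turning one bulk RFID scan into a structured pallet -> SKU -> cases
# view. In raw form the reads are just a flat set of tag IDs; the hierarchy
# has to be recovered from reference data. The tag names and lookup tables
# below are invented for illustration.

from collections import defaultdict

CASE_TO_PALLET = {"case1": "palletA", "case2": "palletA", "case3": "palletB"}
CASE_TO_SKU = {"case1": "SKU-100", "case2": "SKU-100", "case3": "SKU-200"}

def structure_scan(tag_ids):
    """Group a flat bulk read into {pallet: {sku: [case tags]}}."""
    result = defaultdict(lambda: defaultdict(list))
    for tag in tag_ids:
        pallet = CASE_TO_PALLET.get(tag)
        if pallet is None:
            continue  # unknown tag: would be routed to exception handling
        result[pallet][CASE_TO_SKU[tag]].append(tag)
    return {pallet: dict(skus) for pallet, skus in result.items()}

print(structure_scan(["case1", "case2", "case3"]))
```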

In any case, software vendors will have to create new data fields to cope with the inevitable data deluge by ensuring that data tables, transaction systems, and data warehouses can handle all of it; in general, vendors have by and large been responding to the RFID challenge. The likes of Provia, which have significant installed bases in the retail and consumer product goods (CPG) sectors, have been leading the pack by developing RFID interfaces to their applications and by adding software modules or upgrading their products to cope with the serial numbers in RFID tags.

But these effective, albeit not necessarily neat, solutions will still require suppliers and retailers to deploy specialized middleware and hardware to manage the huge amount of data coming from the readers. The more proactive companies are thinking about putting the right business intelligence (BI) and analytic architecture in place to make the most of RFID data and drive better supply chain decisions. Users should also check out these vendors' lab services, which include consulting and integration, as well as painstaking testing of multiple vendors' RFID equipment and hardware to simulate real-world supply chain business processes. Full and careful consideration should be given to vendors that have experience laboring in the trenches and that have done it many times before.

One of the main obstacles is the lack of integration, since there is a dearth of software tools from enterprise application integration vendors to get data from RFID tags and readers into existing business systems, meaning that companies are often forced to do expensive custom integration work. Together with the vendors, they will have to devise ways to filter out false or redundant reads and pass on only useful information to enterprise applications. Managers will have to devise policies on how much data to collect from RFID systems, which signals to record, which to ignore, and which to forward to a transactional system or a person for an action. Such policies could eventually be coded into business logic of SCE applications or some type of a business-rules engine, and then enforced by middleware.
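Such disposition policies, deciding which reads to ignore, which to record, and which to forward for action, might be expressed as simple rules in middleware, roughly like this (the rule conditions and field names are hypothetical examples, not any vendor's actual rule syntax):

```python
# Sketch of a read-disposition policy of the kind described: each RFID event
# is classified as IGNORE, RECORD, or FORWARD (to a transactional system or a
# person for action). The rules below are hypothetical examples of such a
# policy, not a real product's business-rules engine.

def disposition(event):
    """event: dict with 'tag', 'location', 'expected_location', 'duplicate'."""
    if event["duplicate"]:
        return "IGNORE"                 # redundant read, already processed
    if event["location"] != event["expected_location"]:
        return "FORWARD"                # misplaced item: alert an operator
    return "RECORD"                     # normal read: log to inventory

event = {"tag": "tagA", "location": "dock2",
         "expected_location": "dock1", "duplicate": False}
print(disposition(event))  # FORWARD
```

Coding such rules into the business logic of SCE applications, or a rules engine enforced by middleware, is exactly the policy decision the paragraph above describes.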


SOURCE:
http://www.technologyevaluation.com/research/articles/rfid-case-study-gillette-and-provia-part-two-challenges-and-lessons-learned-17432/

The Future for an E-sourcing Solutions Builder

The Upcoming Attractions

2006 has been (and will continue to be) a year of sustained packaging of the marketing message, and of product and service enhancement and delivery, for TradeStone Software, Inc. (www.TradeStoneSoftware.com), a provider of collaborative e-sourcing solutions for Global 2000 companies.

Part Four of the series Collaborative Sourcing Solution Vendor Leaves No Stone Unturned.

For information on TradeStone's history, see Collaborative Sourcing Solution Vendor Leaves No Stone Unturned. Also see Well-designed Solution for Sourcing: Its Technological Foundation and How It Works, and Web-based Solution Steps Out for Cohesive Retailer Sourcing.

The five major functional modules detailed in Web-based Solution Steps Out for Cohesive Retailer Sourcing will eventually represent the three major logical areas:

1. retail product lifecycle management (PLM), via extension of the TradeStone Product module
2. global sourcing order management, via virtual merger of the TradeStone Sourcing and Order Management modules
3. supply chain logistics management, via virtual merger of the TradeStone Logistics and Finance modules

Having certainly fared better than its previous incarnation as RockPort, in its third year of existence TradeStone now employs about fifty people at its headquarters and offices in Atlanta (US), Bangalore (India), and London (UK), and staff growth will continue for the foreseeable future.

Furthermore, things seem to be looking up, since IBM recently selected TradeStone as the global sourcing and order management linchpin for its retail supply chain solution at the La Gaude Centre for Supply Chain Excellence, alongside SAP, i2 Technologies, Galleria, and DemandTec. The roster of customers has grown to about a dozen, now including such names as The Limited, JC Penney, Federated Stores, The Home Depot, Pacific Alliance, Stride Rite, KarstadtQuelle, and Guitar Center. The vendor's current management team and professional services organization have over 250 years of combined retail experience to ensure customer success; the up-and-coming commerce communities are expected to spur more quality control functionality, and some customers have already been recognized for their successful deployments and results.

As for ongoing product enhancements, early in 2006 TradeStone announced the availability of TradeStone Suite v. 3.5 (currently in production at several customer sites), which introduced several planning capabilities that bind existing sourcing and order execution functionality, and which featured significant enhancements to the Finance and Logistics modules, with the idea of continuing to foster rapid adoption and deployment across expanding supply chains. The new features in Version 3.5 support retailers, vendors, and manufacturers in building out their own exclusive TradeStone Commerce Communities, whose members should benefit from the suite's ability to connect planning, sourcing, and order execution, with access across multiple applications. This should provide a financial and merchandise view of sourced and ordered items across all production phases, as the system captures committed-to quantities, approved quantities, and on-order quantities by selling channel, production status, and financial commitment.

TradeStone Commerce Communities better unite retailers with their suppliers (for instance, Deutsche Woolworth with 2,000 suppliers; American Eagle with 400 suppliers; and Pacific Alliance with 800 suppliers), agents, and inspectors. They provide specialized services for suppliers, including inspection services, quality testing facilities, documentation services, financial services, and so on. They also provide supplier and vendor report cards for better supply base rationalization, as well as visibility into available capacity.

Global Trade Infrastructure Building Blocks

Going forward, the vendor will continue to round out its global trade infrastructure, which on a high level, will consist of the following building blocks:

* a Fulfillment Center, to provide supply chain execution, global order management, e-document generation, invoicing, financing, dynamic trace and track, global cost calculator, and data normalization;
* a Trade Tools Center, to provide StepBuilder business processes, composite views across multiple systems, and collaboration across multiple parties;
* an Information Center, with vast data on standard codes, currency information, trade risk reports, government information, centralized corporate libraries, and international documentation templates; and
* a Community Center, to provide services such as registration, partner profiling, e-links to banks, agents, government, virtual trade missions, online showrooms, and logistics and IP providers. ("IP" stands for "information pooler," a service that aggregates and disseminates data such as new freight rates. It can be a static data pool or dynamic—one can pull information from it, or the source can push information into the community).

The idea behind the creation of TradeStone Commerce Communities is to reduce the cost of doing business globally by facilitating the movement of ideas, information, goods, and money. As described earlier, the TradeStone Suite provides buyers, merchandisers, suppliers, vendors, and banks with a single view of financial information across the entire purchasing process—from the initiation of an order, right through to final payment. From the moment a purchase order is entered into the suite, the order details are stored centrally, and are then used to automatically pre-populate subsequent standard forms, such as advanced shipping notices (ASNs), bills of lading (BOLs), commercial and service invoices, and payment information.
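The pre-population flow described, with one centrally stored purchase order seeding later trade documents, can be sketched as follows (the field names are illustrative, not TradeStone's actual schema):

```python
# Sketch of document pre-population: a purchase order stored once is used to
# seed later trade documents such as an ASN or a commercial invoice, so the
# shared fields are never re-keyed. Field names here are illustrative only.

PURCHASE_ORDER = {
    "po_number": "PO-1001",
    "buyer": "Retailer Inc.",
    "supplier": "Factory Ltd.",
    "items": [{"sku": "SKU-100", "qty": 500, "unit_price": 4.25}],
}

def build_document(doc_type, po, **extra_fields):
    """Copy the shared PO fields into a new document, then add the
    document-specific fields (ship date, invoice number, etc.)."""
    doc = {"doc_type": doc_type,
           "po_number": po["po_number"],
           "buyer": po["buyer"],
           "supplier": po["supplier"],
           "items": po["items"]}
    doc.update(extra_fields)
    return doc

asn = build_document("ASN", PURCHASE_ORDER, ship_date="2006-03-01")
invoice = build_document("invoice", PURCHASE_ORDER, invoice_number="INV-77")
print(asn["po_number"], invoice["po_number"])  # PO-1001 PO-1001
```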

The following upgrades available in the TradeStone Suite v.3.5 should further automate this process:

* Letter of Credit Processing: This feature will unite the buyer, supplier, and their financial institutions, as this virtual link will allow the supplier to collect new orders and present them electronically to the financial institution or financing partner in order to receive any necessary drafts or cash advances to pay for raw materials, new machinery, or any quality assurance tests necessary to begin work on the new orders.

* Packing List: In order to save time and eliminate redundant data entry, suppliers build customized packing lists from original purchase orders for each shipment. The packing list includes information on bar codes, radio frequency identification (RFID) tags, and containerization specifications, and also ensures suppliers are in compliance with buyers' documentation requirements. Accurate, timely, and standardized documents in turn mean faster clearance through customs.

* Logistics and Finance Documentation: All shipping and banking papers will be automatically pre-populated with order information drawn from original purchase orders, saving valuable time by using data that is already available in the system. These documents, with their detailed information regarding carriers, shippers, country of origin, export country, import country, and final destination, are essential for meeting global trading security standards and for clearing customs without delay.

* Payment Builder: Once a buyer or merchant approves an invoice for payment, other TradeStone users in the finance department will be alerted, and can use this feature to authorize payment, eliminating inter-office memos, manual check cutting, and other manual procedures. The Payment Builder will be automatically updated with any pre-payments, change orders, and advanced shipments, so that there is never a question of what to pay, or when.

* Payment Summary: When a payment is made, a payment summary report is automatically generated, providing a view into each payment and the related invoices, and allowing the buyer to link back and forth between payment records and original invoices for a concise reconciliation of each transaction. This information is critical for chief financial officers (CFOs) looking for one report showing the reconciliation between the items that have been committed to, the items that have been manufactured towards the plan, and the items that have been paid for towards the season's plan.
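The pre-population flow behind these upgrades (a single purchase order record entered once, then feeding packing lists, invoices, and payment documents) can be sketched roughly as follows. All class and function names here are hypothetical illustrations, not TradeStone APIs:

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    """Central order record, entered once; later documents draw from it."""
    po_number: str
    buyer: str
    supplier: str
    items: list  # list of (sku, description, quantity) tuples

def build_packing_list(po: PurchaseOrder, carton_size: int) -> dict:
    """Derive a packing list from the original PO instead of re-keying data."""
    lines = []
    for sku, desc, qty in po.items:
        cartons = -(-qty // carton_size)  # ceiling division
        lines.append({"sku": sku, "description": desc,
                      "quantity": qty, "cartons": cartons})
    return {"po_number": po.po_number, "supplier": po.supplier, "lines": lines}

def build_invoice(po: PurchaseOrder, unit_prices: dict) -> dict:
    """Pre-populate a commercial invoice from the same central PO data."""
    lines = [{"sku": sku, "quantity": qty, "amount": qty * unit_prices[sku]}
             for sku, _, qty in po.items]
    return {"po_number": po.po_number, "buyer": po.buyer,
            "total": sum(line["amount"] for line in lines), "lines": lines}
```

The point of the sketch is the design choice: because every downstream document is derived from one stored record, a correction to the order propagates everywhere, and redundant data entry (with its attendant errors) disappears.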

Product Lifecycle Management for Retail

Early in 2006, TradeStone also announced that it had added PLM capabilities to the TradeStone Suite. The TradeStone PLM for Retail module (which is also in development testing now, with key customers) will address the specific needs of the apparel, footwear, and hard lines communities, by providing the collaboration tools necessary to automate and more easily manage the product development process, from initial concept through to delivery. It will also provide the tools to enable a retailer and its suppliers to collaboratively develop new products, better manage the quality testing process, meet milestone deadlines, and rate each party's responsiveness with scorecards. All this should enable products to move through the supply chain more quickly and reach the sales floor faster.

TradeStone's PLM module will address the fashion industry realities with a series of vendor collaboration tools designed to facilitate the more accurate communication of design iterations between the technical design group, merchandising, and the factory.

For an extensive discussion of global retail sourcing, see The Gain and Pain of Global Retail Sourcing, The Intricacies of Global Retail Sourcing, and The Fashion and Apparel Retailers' Conundrum.

The solution will thereby monitor the progress of a product, and assure quality throughout the process, starting with the design concept, the product brief, the technical package, the request for quote (RFQ), the order, and all phases of testing, right through to delivery. Such tools can speed up the product design phase, while making the manufacturing and testing phases up to 30 percent more efficient, which can shave significant time off the supply cycle. The tools will include a number of components:

* Time and Action Calendars: Since quality assurance (QA) and control milestones are key to product design and production, keeping tabs on those benchmarks is critical to the entire process, and the product will accordingly enable alerting, thereby promoting real-time collaboration to resolve issues and keep product moving toward the store shelves. Pervasive time-and-action calendars for buyers and suppliers automatically assign production milestones (for everything from fabric samples, lab dips, and washability tests, to final product quality assurance testing), while master calendaring layers in additional work in progress (WIP) milestones for manufacturing (including bill of materials [BOM] receipts dates, piecing, assembly, and finish trims). The order status visibility function provides statuses on BOMs and WIPs, along with approval workflows across designers, merchants, and factories, which all can view and reconcile orders per any selling channel.

* BOM Aggregator: Testing fabrics and trims begins before garments are ever assembled, and continues throughout delivery. By understanding where common components—such as fabrics, trims, and accessories—are used throughout the collection, retailers should be able to swiftly address any quality testing failures across multiple products.

* Component Library: Retailers traditionally work off a base of approved configurations for a given season or for product families, and this feature will enable them to have a growing database of approved designs, as well as configurations or pieces (such as fabrications, buttons, zippers, trims, and embellishments). With bulk purchases of these key components, merchants should not only be able to receive better pricing on raw materials, but should also be able to take advantage of leftover fabrics and trims to create additional items, such as accessories or limited edition specials.

* Packaging Specifications: All too often, packing specifications are left up to the supplier, and shipments arrive at the retailer's distribution centers only to be rejected. To prevent this, hang tags, care labels, and inventory control tags will be specified in advance, stored in the component library, and used on multiple garments.

* Party Scorecard: Retailers and manufacturers want to build their relationships with confidence in the quality of their raw materials and finished goods, even when trading partners are located across the globe. This tool will enable retailers to pre-qualify suppliers for a particular order based on their previous work and certifications, and to grade them on the quality of new orders received. The scorecard is also a 360-degree evaluation tool for suppliers, enabling them to rate retailer timeliness with feedback, problem resolution, and payment information.

* Integration with Google Earth: Integration with Google's geospatial locator application will mean that retailers and vendors will be able to identify, on the spot, their network of approved testing facilities to speed product through the rigors of testing. This should also reduce travel expenses by auto-assigning resources based on commodity expertise, inspection locations, and resource availability.
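As a rough illustration of how a party scorecard like the one above might combine ratings into a grade and a pre-qualification decision, here is a minimal sketch. The criteria, weights, and function names are assumptions for illustration, not TradeStone's actual scoring model:

```python
def score_supplier(ratings: dict, weights: dict) -> float:
    """Weighted average across rating criteria.

    ratings: criterion -> score on a 1-5 scale
    weights: criterion -> relative importance (need not sum to 1)
    """
    total_weight = sum(weights[c] for c in ratings)
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

def prequalify(history: list, threshold: float) -> bool:
    """Pre-qualify a supplier if the mean score of past orders meets a bar.

    history: list of scores from previously completed orders
    """
    return bool(history) and sum(history) / len(history) >= threshold
```

A 360-degree variant, as described above, would simply run the same computation in the other direction, with suppliers rating retailer timeliness, feedback, and payment behavior.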

The year 2006 also marked the debut of TradeStone for Trading and Product Development Companies, the latest addition to its TradeStone Suite. Through a series of customer-driven enhancements, it supports organizations in the collaborative development of their products and brands, allowing them to create product line and product collection offerings for their customers or internal departments. As buyers narrow their selections to a collection within the line, the impact on buying budgets and initial mark-up within a delivery and floor set is dynamically presented.

Central to TradeStone for Trading and Product Development Companies is the collection review capability within the Virtual Showroom, a secure online workspace that enables local retail buyers and merchants to select pieces, as well as attributes (cuts, colors, sizes, and packaging) from a general collection, so they can create local brand extensions of private-labeled merchandise. The Virtual Showroom provides a sophisticated way for buyers and merchants to examine the line by collection, summarized by selling channel, delivery period, class, and sub-classes. The visual presentation of product lines and collection portfolios gives buyers the speed to market they require in order to make changes that influence and determine future flows to stores at the color, size, and style level, based on market trends. The central buying group at the trading company or the product development company can collect all the local orders in real time, compare selections, and make recommendations based on known volume discounts. Understanding what their local stores want to sell during the season, trading and product development companies can adjust mark-up percentages, compare the projected selling price to the estimated landed cost, and forecast margins per item by distribution.
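The margin arithmetic mentioned above (comparing the projected selling price to the estimated landed cost) reduces to a couple of simple formulas, sketched here with hypothetical function names:

```python
def forecast_margin(selling_price: float, landed_cost: float) -> float:
    """Projected margin as a fraction of the selling price."""
    return (selling_price - landed_cost) / selling_price

def price_for_target_margin(landed_cost: float, target_margin: float) -> float:
    """Selling price needed to hit a target margin on a given landed cost."""
    return landed_cost / (1.0 - target_margin)
```

For example, an item with a $30 landed cost sold at $50 forecasts a 40 percent margin; run the other way, a 40 percent margin target on that landed cost implies a $50 price. A central buying group would apply this per item, per distribution channel, across the season's plan.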

Too often, many brands become locked into generic product portfolios, not by design, but by the necessity of serving a multitude of markets. Product portfolios are often developed around a "most common" denominator—a specification or attribute that will serve the largest demographic contingent. This generic offering can erode a brand—or worse, interfere with quarterly earnings opportunities. This is in sharp contrast to the necessity of delivering fresh and innovative products, on trend and on time, to unique geographies, consumers, and product development companies. For this, retailers need tools to work collaboratively with their global manufacturers and their global outlets. TradeStone recognizes the changing roles of retailers and trading companies as they take on the challenges of developing products and of acting as their own private-brand companies. TradeStone for Trading and Product Development Companies aims to address those changes, and to provide a platform and functionality to meet those needs. For instance, orders for product within a collection can be grouped and split into factory orders, where factories and trading company buyers and product managers collaborate on the orders to refine product flow (by color, size, pre-pack, and so on) as changes to market demand impact customer order flows. These modifications and change orders are automatically date-stamped, creating a complete order history. This change-tracking is critical, as it provides a financial and merchandise view of sourced items across all production phases. The plan/buy list provides a comparison between financial plans, the execution of pre-buy requested quantities, and on-order commitments for a period by department, class, or sub-class.
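The date-stamped change orders described above amount to an append-only audit log. A minimal sketch, with hypothetical names and no claim to match TradeStone's implementation:

```python
from datetime import datetime, timezone

class OrderHistory:
    """Append-only, date-stamped log of modifications to a factory order."""

    def __init__(self, order_id: str):
        self.order_id = order_id
        self.events = []

    def record(self, user: str, field_name: str, old, new):
        # Every change order is stamped on entry, building a complete history
        # that supports both financial and merchandise views of the order.
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "user": user, "field": field_name, "old": old, "new": new,
        })

    def changes_to(self, field_name: str) -> list:
        """All recorded modifications to one field, oldest first."""
        return [e for e in self.events if e["field"] == field_name]
```

Because entries are only ever appended, never edited, such a log doubles as the tamper-evident change trail that compliance reviews look for.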

In addition to the Virtual Showroom, TradeStone for Trading and Product Development Companies includes a series of usability enhancements to provide greater insight into all activities within the buying process:

Regulation: This enhancement simplifies viewing the changes made to any transaction by anyone within the supply chain, a critical feature when demonstrating compliance with regulations such as the US Sarbanes-Oxley Act (SOX) or Basel II (a set of banking industry recommendations by influential banking representatives from the thirteen countries of the Basel Committee on Banking Supervision).

Security: In order to fit easily into existing corporate security policies, this enhancement adds many security features, including hierarchical permission structures, password protection, and change-tracking.

Dynamic Data: This enhancement features Smart Tags, an interactive hyperlink utility that drills down to critical information, enabling users to display more detailed information at the click of a button.

Also in the first half of 2006, TradeStone extended its TradeStone Suite for delivery as an on-demand service (see Software as a Service Is Gaining Ground). Offered via the software as a service (SaaS) delivery model, the suite now gives small-to-medium retailers, apparel product development companies, and their global suppliers access to its core functionality at a lower price point, without the investment in an IT and software infrastructure. According to TradeStone, the new service offering extends the vision of unified, borderless commerce, connecting even more retailers, manufacturers, suppliers, and product development companies in a global network that spans geographies and technologies. On the supplier side, TradeStone's aforementioned virtual product showroom and production collaboration capabilities support vendor offerings, demonstrating that even the smallest factories can provide credible information flow and quality products to multibillion dollar organizations. This service model strives to enable these companies to have the software up and running in as little as two weeks, and prices range from $100 (USD) to $300 (USD) per user per month.



SOURCE:
http://www.technologyevaluation.com/research/articles/the-future-for-an-e-sourcing-solutions-builder-18654/

Development of an Internet Payment Processing System

Introduction

Early in 1999, the author was asked by E-Bank (http://www.e-bank.co.yu), one of his clients, to develop the first Yugoslav Internet payment processing system. This client is a Yugoslav payment processing company that uses BankWorks software by RS2 Software Group (http://www.rs2group.com) to process transactions made at ATMs (Automatic Teller Machines) and POS (Point Of Sale) terminals.

Developing an Internet payment processing system in Yugoslavia under bizarre circumstances, during bombing raids and power cuts, was definitely an unforgettable experience. The system was deployed in September 1999 and has worked without any problems ever since. It has withstood numerous attacks by hackers and would-be intruders.

The system architecture is very similar to the Three Domain (3D) model that started to emerge later. The 3D model will probably become a de facto standard for transactions on the Internet once its specification is finalized. The developed software received the Diskobolos 2000 (http://www.jisa.org.yu/2000.htm) award in the finance category, an annual award granted by the Yugoslav Informatics Alliance. This article describes a success story that is worth sharing with a wider audience.

E-Commerce Applications

Before we proceed any further, we have to distinguish two types of transactions on the Internet:

Transactions of the first type are not performed in real-time. When a card holder submits payment and gets a response, payment is only posted for further processing. Actual authorization of the transaction is performed (manually) at a later time and consequently at a higher operating cost. This is acceptable when delivery of goods and services is slow, e.g., via regular mail. As an example, when the author purchased a book from Amazon.com in August 2000, the order was approved after an hour or so.

The other type of transaction is performed in real time. When a card holder submits payment and gets a response, the payment is complete: money in the card holder's bank account is earmarked, and transfer of the money to the merchant's bank account is guaranteed. This type of transaction is required when delivery of goods and services is immediate (e.g., a download of software or MP3 files). Despite this requirement, some e-commerce sites use the first type, and deliver goods and services based on an assumed success of a future authorization. This approach risks losses due to unauthorized transactions.

The system described in this article can be configured to work in either mode. When it works in the second, fully automated mode, the system interfaces with the BankWorks system in order to authorize transactions. This interface is simple and should be easy to adapt to other authorization systems.
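The two modes can be illustrated with a small sketch. The `authorize` callable stands in for the call into the authorization host (the BankWorks interface in the deployed system); the function name and statuses are hypothetical:

```python
def process_payment(order: dict, authorize, real_time: bool) -> str:
    """Route a submitted payment through one of the two transaction modes.

    order: submitted payment details
    authorize: callable standing in for the authorization-host interface
    real_time: configuration flag selecting the mode
    """
    if not real_time:
        # Deferred mode: accept the payment now; actual authorization is
        # performed manually later, at a higher operating cost.
        return "PENDING"
    # Real-time mode: funds are earmarked before the response is returned,
    # so a completed response guarantees the transfer to the merchant.
    return "COMPLETED" if authorize(order) else "DECLINED"
```

Keeping the mode behind a single configuration flag is what lets one system serve both slow-delivery merchants and immediate-delivery merchants.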

Business Model

In order to offer the best options (a tradeoff between e-commerce application cost and sophistication) to merchants and card holders, a business model of transaction processing has been developed based on the above discussion. It divides merchants on the Internet into three groups, depending on their e-commerce applications:

1. Those interested in collecting payments

2. Those requiring pre-processing and authorization

3. Those requiring preprocessing, authorization, and post-processing

Merchants in the first group are only interested in collecting (often periodic) payments for their goods and services. The best examples are utility companies and subscription services. Such payments are identical to payments made at some ATMs. A card holder logs on to an ATM, selects a merchant (e.g., a phone company), and enters the amount and the payment reference ID (e.g., his or her phone number). Similarly, on the Internet, a card holder can log on to the portal of the Internet payment processing company and pay bills. To further facilitate the payment process, merchants can, for a fee, keep their customers' account balances in the Internet payment processing system. That way, card holders may review their account balances, get a pre-filled payment form, and simply confirm payment.

Note that merchants do not even need their own web site. Mobtel (http://www.mobtel.co.yu), a post-paid mobile phone company, operated in this way for a brief period of time, until its setup was redesigned into a more sophisticated e-commerce application (also developed by the author) that includes calculation of promotional discounts and payment pre-processing.

The next group of e-commerce applications has more complex business logic executed at the merchant's web site. However, these applications do not have, or need, any automated processing after payment is completed. Delivery of goods and services is based on payment reports available online from the processing company. Examples of such applications are Simpaid (http://www.simpaid.co.yu), a pre-paid mobile phone company, and Eunet (http://eunet.yu), an Internet service provider. The payment process is shown in Figure 1. The final bill is presented to the card holder on the merchant's site. The information about the payment (merchant's name, payment reference ID, and amount) is passed to the payment form on the payment processing site. There, the card holder fills in the card information and completes the payment. Note that confidential card information is protected by the SSL (Secure Sockets Layer) protocol at the payment processing site, so the merchant does not need SSL on his or her own site.

Figure 1.
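The handoff shown in Figure 1, in which only non-confidential order details (merchant's name, payment reference ID, and amount) travel from the merchant's site while card data is entered on the SSL-protected processing site, can be sketched as a simple URL builder. The base URL and parameter names are illustrative assumptions, not the deployed system's actual interface:

```python
from urllib.parse import urlencode

def payment_redirect_url(gateway_base: str, merchant: str,
                         reference_id: str, amount: float) -> str:
    """Build the redirect to the payment page on the processing site.

    Only non-confidential order details are included; the card holder types
    card data directly into the SSL-protected processing site, so the
    merchant's own site never handles it and needs no SSL.
    """
    params = {"merchant": merchant, "ref": reference_id,
              "amount": f"{amount:.2f}"}
    return f"{gateway_base}?{urlencode(params)}"
```

This separation is the key design choice of the second merchant group: business logic stays on the merchant's site, while everything security-sensitive is concentrated at the processing company.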

The last group of e-commerce applications includes all three phases of payment processing: pre-processing, authorization, and post-processing. Pre-processing is performed at the merchant's site and may include collecting information about the card holder, and calculating cost, taxes, discounts, shipping and handling, etc. Authorization is executed at the payment processing site in the form of a remote procedure call made from the merchant's site. Based on the success or failure of this step, post-processing is executed at the merchant's site: the purchase order is completed, and goods and services are delivered (e.g., the MP3 file is downloaded).

Implementation of such e-commerce applications is not an easy task. A transaction consists of multiple applications executed on multiple computers across the Internet. The chain of events may be broken at any step, at any time, and for any reason (e.g., a power failure or a cut communication cable). For example, a failure after authorization but before post-processing is completed could leave the card holder's account charged without goods and services being delivered. The e-commerce application must have recovery procedures so it can either backtrack (e.g., credit the card holder's account and cancel the purchase order) or finalize the order (e.g., deliver the goods and services). An example of such an application is Atlantik (http://www.atlantik.co.yu), an on-line betting company.
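The backtrack-or-finalize recovery logic described above can be sketched as follows. All four callables are hypothetical stand-ins for the real site-to-site calls, not the actual Atlantik or processing-company code:

```python
def run_transaction(preprocess, authorize, postprocess, refund) -> str:
    """Three-phase payment flow with a compensation path.

    If post-processing fails after a successful authorization, the charge
    is backtracked (refunded) so the card holder is never left billed for
    undelivered goods and services.
    """
    order = preprocess()            # merchant site: totals, taxes, discounts
    if not authorize(order):        # processing site: remote procedure call
        return "DECLINED"
    try:
        postprocess(order)          # merchant site: e.g. deliver the MP3 file
    except Exception:
        refund(order)               # backtrack: credit the account, cancel order
        return "ROLLED_BACK"
    return "COMPLETED"
```

A production system would also persist the transaction state before each step, so that recovery can resume even after a crash of the coordinating process itself.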

System Users

There are three types of users in the system described in this article:

Besides purchasing goods and services on the Internet, card holders may view information about their card accounts using the portal of the Internet payment processing company. This information includes account balance, previous payments, and history of accesses to the account information (for security purposes).

Merchants may view information about their accounts using the portal of the Internet payment processing company. This information includes previous payments made to the merchant, and history of accesses to their account information (for security purposes). Merchants can also change password for access to the account information.

The system administrator of the processing company manages the system using the portal and back-office applications. The administrator may configure numerous operational parameters for each merchant in the system database, such as the type of e-commerce application, the types of cards accepted by individual merchants, the appearance of payment receipts, etc. The administrator can use detailed security and access logs to track and locate would-be intruders.

System Architecture

The system architecture of the Internet payment processing system is shown in Figure 2. All system components are connected via the Internet. The system has a multi-tier architecture typical of applications on the Internet.



SOURCE:
http://www.technologyevaluation.com/research/articles/development-of-an-internet-payment-processing-system-16681/

Microsoft's Dynamic New Approach to Professional Services Automation

Introduction

With the recent re-branding of the Microsoft Business Solutions product line as Microsoft Dynamics, Solomon, Microsoft's flagship professional services automation (PSA) solution for the small and medium business (SMB) market, has been repackaged as Microsoft Dynamics SL. Microsoft Dynamics SL version 6.5 extends the solution's prior functionality through its business portal, as well as by offering new modules for purchase requisitions and bank reconciliations. Moreover, Dynamics SL offers its clients extensive project portfolio management (PPM) capabilities through its real-time, bidirectional integration with Microsoft Project Server 2003.

Replacing the old Project Green strategy, the new Microsoft Dynamics initiative will align its product line of business solutions, including Microsoft Dynamics SL, with its research and development (R&D) efforts in two waves. The first wave, which is underway and will continue into 2007, involves re-branding all business solutions under the Microsoft Dynamics banner, and developing these products to adopt a similar look and feel in terms of their interfaces and their usage of the business portal. The second wave will deliver a single Microsoft Dynamics solution that combines best-of-breed functionality from each product line by the end of 2008. To help its clients with the transition, Microsoft has put in place its Transformational Assurance plan that will allow users to migrate to the Microsoft Dynamics solutions at their own pace.

Eventually, Microsoft likely will absorb its PSA product strategy into its Microsoft Dynamics product strategy, providing a unified solution that will serve the various vertical markets requiring financials, enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM), and PSA solutions. However, the question remains: will Microsoft move in the direction of offering a single integrated PPM solution for service organizations by embedding Microsoft Project Server with the Microsoft Dynamics product line, or will it continue to offer distinct solutions, integrating Microsoft Project where needed? In either case, the Microsoft Dynamics product line will remain compelling to SMB organizations by continuing to offer a single technology that provides comprehensive PPM capabilities through Microsoft Project Server.

Microsoft Dynamics SL Components

Microsoft Dynamics SL is designed for organizations with 50 to 2,000 employees and revenues of between $5 million (USD) and $250 million (USD). The average Microsoft Dynamics SL project costs approximately $150,000 (USD) for both licenses and implementation, with a single user license starting at $5,000 (USD). Dynamics SL serves project-centric and service-oriented organizations, and is also suitable for distribution organizations.

Microsoft Dynamics SL comes in two flavors: the standard edition designed for a single site organization with up to 99 employees and a maximum of 10 user licenses; and the professional edition designed for multisite organizations and a maximum of 2,000 employees. The main components of Microsoft Dynamics SL include the following.

Financial Management and Payroll

Microsoft Dynamics SL has a complete financial package providing general ledger (GL), accounts receivable (AR), accounts payable (AP), and payroll modules. The modules are designed to handle multisite organizations and multiple currencies. For service organizations that operate internationally, Microsoft Dynamics SL's comprehensive multicurrency capabilities (especially for billing and expenses) are a critical feature. For SMB service organizations, the financial package offers complete integrated back-office functionality, which is not typically found in best-of-breed PSA solutions.

Project Management and Accounting

Microsoft Dynamics SL's strength lies in its project management and accounting capabilities, which are tailored for project-centric organizations. By providing extensive time, billing, and expense modules for service organizations, Microsoft Dynamics SL supports the detailed tracking of tasks and expenses relevant to billable projects. In comparison to other PSA solutions, Microsoft Dynamics SL provides above average resource utilization, contract management, and project control capabilities.

Field Service

This module streamlines service delivery and all related activities. For professional services organizations (PSOs), Microsoft Dynamics SL provides additional functionality, such as managing equipment maintenance, service level agreements (SLAs), flat pricing functionality, and service tracking of account history and employee profitability. Moreover, Microsoft Dynamics SL is one of the few integrated PSA solutions that offers field service functionality to small organizations.

Distribution

Microsoft Dynamics SL also differentiates itself by serving distribution organizations. In addition to providing extensive functionality to manage shipments, bills of materials (BOM), and inventory control and replenishment, Microsoft Dynamics SL supports electronic data interchange (EDI) and e-commerce capabilities. Furthermore, this module has robust order management and purchasing features that are specific to distribution organizations.

Foundation

Microsoft Dynamics SL's Foundation module allows users to work with other Microsoft applications and technologies, as well as with Crystal Reports and the Business Portal. In addition, the Foundation module is delivered with a tool kit for customizations and integrations utilizing standard Microsoft technology. Consequently, Microsoft Dynamics SL provides the best tools to leverage an organization's existing Microsoft-based information technology (IT) infrastructure.

What's New in Microsoft Dynamics SL 6.5?

Microsoft Dynamics SL version 6.5 offers new modules for requisitions and bank reconciliation, and includes new feature enhancements both at the technological level and in the business portal. The requisitions module is primarily targeted at distribution organizations. It provides a request entry system for employees and multiple levels of approval per project. Following the submission of a request, managers can search for similar items in inventory and previous bids, and approve or reject requests. The new bank reconciliation module (found within the financial management module) lets users reconcile bank statements, matching bank accounts to accounts and sub-accounts in the GL. Users can also reconcile bank statements against transactions in the GL, AP, AR, and payroll modules.
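As an illustration of the reconciliation step (matching statement lines to ledger transactions), here is a minimal sketch of the general technique; it is an assumption for illustration, not the module's actual algorithm:

```python
def reconcile(statement_lines: list, ledger_entries: list) -> dict:
    """Match bank-statement lines to ledger transactions.

    Both inputs are lists of (reference, amount) tuples; a statement line
    matches when a ledger entry with the same reference carries the same
    amount. Anything else is flagged for manual review.
    """
    ledger = dict(ledger_entries)
    matched, unmatched = [], []
    for ref, amount in statement_lines:
        if ledger.get(ref) == amount:
            matched.append(ref)
        else:
            unmatched.append(ref)
    return {"matched": matched, "unmatched": unmatched}
```

In practice a reconciliation module must also handle timing differences (checks issued but not yet cleared), which is why the unmatched list is reviewed rather than treated as an error outright.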

Another new feature in version 6.5 is support for SQL Server 2005 and Visual Studio 8.0. This allows increased flexibility for customizations. There are also additional features in distribution with EDI integration and purchase order management. However, the majority of new features are found in the business portal. New features in the business portal include extranet functionality, item request entry and approval in the new requisition module, inventory lookup, multi-company enabled queries, mass import of users, copying of business portal role, a role-based home page, customer and vendor data access policies, support for Windows SharePoint Services version 2 (SP2), and management of data permissions in the administration console.

Product Strengths

Microsoft Dynamics SL offers complete PSA functionality for project-centric organizations. Its target market is billable organizations, such as PSOs, computer and IT-related services, engineering and architectural firms, and management consulting firms. The Microsoft Dynamics SL solution has also gained some traction in the not-for-profit and health sectors, but it is particularly strong in the construction and distribution markets. This is because extensive functionality in inventory management, order management, and purchasing provides a complete solution for distribution organizations. Construction organizations, meanwhile, can benefit from Microsoft Dynamics SL's extensive project accounting and workflow functionality, which is designed for general contractors.

In terms of providing deep functionality in portfolio management, Microsoft Dynamics SL has seamless real-time, bidirectional integration with Microsoft Project Server 2003, enabling project managers and stakeholders to gauge the progress of projects and the utilization of project resources. Microsoft Dynamics SL also has the distinct advantage of offering its clients a fully integrated PPM solution for PSOs by combining (and continuously enhancing) Microsoft technology with Project Server 2003 and SharePoint Services. Furthermore, SMB project-driven organizations that have already incorporated Microsoft Project Server and SharePoint into their infrastructure can obtain a comprehensive, integrated PPM solution with the addition of Microsoft Dynamics SL.

Product Challenges

The real challenge for Microsoft Dynamics SL is its PPM functionality. Currently, project-driven organizations are required to purchase Microsoft Project Server to get extensive portfolio management and project management capabilities. Microsoft's recent acquisition of PPM vendor UMT will only compound this problem. Consequently, organizations requiring strong PPM functionality must purchase Microsoft Project Server in addition to Microsoft Dynamics SL.

For the moment, with many PPM and PSA vendors pitching similar functionality, Microsoft has successfully delineated the positioning of Microsoft Dynamics SL (as a PSA solution) and Microsoft Project Server (as a PPM solution). However, should there be a trend of consolidation between PPM and PSA vendors in the future, Microsoft may have to integrate its PPM solution into the Microsoft Dynamics product line in the near to mid term.





SOURCE:
http://www.technologyevaluation.com/research/articles/microsoft-s-dynamic-new-approach-to-professional-services-automation-18403/

Congress Acknowledges Outdated Banking Laws

Event Summary

On October 22, the White House and Congress agreed to change outdated US banking laws. Until this agreement was reached, the White House had promised to veto the banking reform bill. Details of the compromise have reportedly not yet been disclosed. The new legislation aims to replace banking laws written during the Depression era with up-to-date, year 2000 era banking laws.

Currently, FDIC policy only "encourages" banks to perform information security audits. If a bank does decide to conduct an information security audit, the independent security auditor is hired by the bank itself, which can create a conflict of interest. Moreover, today's banks are not qualified to decide which information technology consultants perform quality audits. Just because a consulting house is a big, well-known name does not guarantee that it will perform an exhaustive, quality information security audit. Every consultancy that performs information security audits does them differently.

The FDIC reviews these optional audits and assigns what is called a URSIT rating to the financial institution. URSIT stands for Uniform Rating System for Information Technology, and the rating is an indicator of how well a bank manages its internal information technology systems, including their security. Currently, the FDIC does not have any formal procedures for assigning URSIT ratings, and URSIT ratings are made available only to the bank's board of directors.

Market Impact

The October 22nd announcement is a clear admission that today's banking laws do little to take Internet banking, and Internet banking security, into consideration.

When Stephen White, an information review examiner for the FDIC, was asked, "Due to all the security compromises on government systems, how can you expect the general public to have faith in the government's ability to monitor information security at banks?" he responded that today's URSIT ratings are meaningless without facts to support them.

Clearly, some banking reform and regulation is direly needed. An independent auditor, not paid by a bank's board of directors, should be auditing all FDIC-insured banks. The FDIC's information security audit should be standardized, and presented to various private sector security forums for review.



SOURCE:
http://www.technologyevaluation.com/research/articles/congress-acknowledges-outdated-banking-laws-15248/

Intranets: A World of Possibilities

Most banks have plans for the Internet. Even though few derive a quantifiable profit from investing in an Internet program, inertia in this electronic era may equal eventual extinction. The Internet is one of the most ballyhooed innovations of the past couple of decades, and has spawned hundreds of new technologies for the banking industry. One of these is a practical tool that few banks have seized on: an intranet.

Paper-based documents lie at the root of the problem. From them stem inefficient, duplicative work processes. These documents can multiply like weeds and impede the growth of productivity in a bank. Even if a bank wants to clear its garden, reengineering work processes can perplex the most methodical individuals. The challenge is to think differently, and avoid simply turning an inefficient paper process into an inefficient electronic process. For some institutions, an intranet will help untangle the jungle of document management activities carried out by their staffs.

Mike Parry, Director of Web Development at Brintech, a bank technology firm, provides an example. "Say a bank wants to redefine the way that loan documents are transferred between the branch and main office, and right now they're delivered by courier or mail. The bank could decide to electronically scan the loan application and e-mail it back and forth between offices, but that really just mirrors the same old process. Or, they could store the information in one place and have the necessary parties view and sign off on it." An intranet provides a mechanism to streamline this process.

The popularity of intranets in U.S. industry is growing steadily. An intranet is roughly a microcosm of the Internet. It functions on a browser-based platform that manages many of the internal functions and work processes of an organization. The difference is access: the owner holds the key. Generally, access is limited to employees within an organization, but can be extended to vendors, clients, or anyone else authorized by the intranet's administrator. Controlling access and ensuring security become important issues, particularly for a financial institution.

A precisely built intranet can thoroughly simplify work processes and provide a repository for all internal, electronic data. It empowers employees and reduces the waste that paper-based documents create. The decision to implement an intranet requires that people in each bank department progressively rethink the way that they do business. Everyone should thoroughly review workflow processes and the trail that each document travels in the bank.

Evaluating the Situation

Decision-makers first need to determine whether to invest in an intranet. To evaluate whether an intranet program would fit into a bank, a bank should address the following questions:

* Does the organization produce, distribute, and update a significant volume of paper-based documents?

* Do employees often need to consolidate information from different places or sources?

* Does the organization require communication between people who are geographically dispersed?

* Are employees often required to research information to complete a task?

* Is the organization committed to a comprehensive reengineering project?

* Does the organization have the resources to implement and manage a significant technological project?

Commitment and resources are key components of a successful project. The intranet needs sufficient resources to complete the project judiciously and maintain its integrity. The bank must be able to answer the above questions affirmatively for an intranet project to succeed. Decision makers should also consider that some results are intangible and difficult to quantify.

Cost and Return

There are obvious costs, such as buying a server, if necessary, and hiring an intranet development firm to establish the system. Other costs include training, initial input of forms and data, work process reengineering, and work time reallocated to the project during its initial phase. Gauging the returns on an intranet investment can also be complex, particularly as they multiply based on how well utilized the system becomes within the bank. If used effectively, work productivity increases into the foreseeable future. Many paper-based processes will be eliminated or condensed, which eventually allows the bank to reallocate resources into sales and customer-related activities. And, the bank will have enhanced internal communications and centralized operations.

"The interest spread for banks is continually shrinking with increased competition, thus the banks' profits are being squeezed meaning that they all will have to look hard at their internal efficiency to remain viable," states David Koto, Executive Vice President at Brintech. "The intranet is a vehicle that will allow them to operate more efficiently with less personnel."

Functions

What can an intranet do for a bank? An intranet adapts to handle future applications and work processes. The intranet can change as the bank grows. Some examples of practical functions are to:

* Accelerate the loan review process

* Store sales and marketing materials

* Provide an online platform for product demonstrations, training, and sales presentations

* Maintain a central repository for product information

* Disseminate sales goals and performance data

* Sustain a sales contact management system

* Provide news groups and online conferences for geographically dispersed sales people and branch managers

* Publish a variety of schedules and calendars

* Help generate customer profitability information

* Store all human resources documents, including personnel policies, benefits information with online enrollment and change forms, and 401(k) material with a calculator and link to the Social Security Administration

* Maintain a storehouse of online forms

* Provide employee contact information

* Post internal job notices

* Distribute and manage employee performance reviews

* Publish current project plans and timelines

* Store Help desk scripts

* Post frequently asked customer service questions

* Maintain a problem-tracking system

* Publish financial reports and the updated budget

* Circulate online expense reports

The potential of an intranet relies heavily on the bank's ability to identify work processes for an electronic format. A good developer will help the bank with this process, and suggest functions that may have gone unnoticed. The real boon for the bank is easily distributed, centralized information. An intranet streamlines multifaceted work processes.

A Work Process Redefined

With an intranet, a bank can improve on scores of standard procedures. One example: a current hiring process may be for a prospective employee to enter the bank, complete an application, and have an interview with one or more supervisors. She is hired, and then fills out several more forms. She may provide her name, address, and birth date repeatedly. Perhaps the bank takes a photograph of her. She must then read the employee guidelines and undergo training. Administrative employees funnel various forms between the branch and Human Resources department, leaving room for oversight and lost documents. Someone tracks that the new employee accomplishes required tasks.

With an intranet, a prospective employee can enter the bank and complete an application online. She has her interview, and is hired. An HR clerk accesses an electronic task list that delineates each task the new employee must accomplish with due dates. The data from her application transfers to HR's benefits forms, branch forms, a calendar, and an employee directory. Her photo is scanned in or digitally taken, and it is temporarily posted to the main page in a new employee section, and stored for later use. She accesses the employee handbook and online training program, and certifies when both have been reviewed. HR and authorized branch management can readily access all documentation. When the employee undergoes a name change or moves to a new address, she can complete an online form that will replace the information everywhere necessary, eliminating redundant processes.

Technological Requirements

What technology is required to operate an intranet? The technology is relatively modest compared to other endeavors if the bank is already connected by a local or wide area network. Since an intranet is basically an internal web site, it requires a web server, browser software, and enough bandwidth to sustain the system. Each user needs a workstation - a bank's current workstations may be sufficient.

The entire system will probably run on one of the two most popular web browsers, Netscape Navigator or Microsoft Internet Explorer. If the bank currently accesses the Internet, installed browser software is probably adequate. The browser concurrently provides access to the intranet and the Internet.

The thorny aspect of the technical side of the project is customizing the intranet for the bank. Accomplishing this is not so difficult, but its engineers must do it in a way that invites use by bank employees. The intranet needs to be user-friendly, and accomplish its goals without intimidating technically challenged staff. A qualified intranet developer spends a great deal of time accounting for usability when crafting the blueprints for the system.

Design and Appearance

What does an intranet look like? An intranet site looks like an Internet site. The owner completely controls the appearance of the site, within the capabilities of web sites as they exist at that moment. (Web site capabilities seem to change at the rate of warp speed.) Since the intranet serves so many functions for people at different levels of technological savvy, consistency in appearance is crucial.

Regardless of the specific page layout, Internet users will find some comfort in the intranet's appearance, easing transition to the new system. Management should carefully evaluate how they want users to interface with the system. Many decisions will need assessment, such as the appearance of the page that users see immediately upon logging onto the system. The home page can be customizable by user or fixed so that the same screen appears before all users, secured areas notwithstanding.

Training and Incentives

What kind of staff training is required? The ease with which an entire staff is trained on a new system relies heavily upon their previous exposure to technology and attitudes toward it. Training is critical, whether or not a bank's staff readily accepts the new system. The bank can heed Aristotle's counsel: "The roots of education are bitter, but the fruit is sweet." Sufficient training will make an intranet program successful; conversely, deficient training will cause discontent and dissatisfaction. The training strategy for the intranet involves several steps.

* Using the Internet. The bank should provide its employees access to the Internet. Using the browser is perfect training for the intranet. Many people have at least some online experience, and sanctioned access may motivate some to learn more. The bank should establish guidelines about when and how long employees may engage in web surfing, and consequences for abuse of the privilege. Providing employees with instruction on using web sites for job-related purposes (they do exist) would be a plus for the bank.

* Interest-free loan programs for PC purchases. This would be a good time to establish an interest-free loan program for employees to make PC purchases. It encourages computer use and means that employees train themselves on their own time.

* Incremental implementation. A bank should implement an intranet in steps. The intranet team can establish a timeline (and post it on the intranet). Each department can gain initial access to the intranet at different points, so that the Help Desk fields questions from a segment of the population at one time. In support of these efforts, the intranet can support a Frequently Asked Questions component that will reduce the call traffic for common questions.

* Formal user training. The bank should have formal training sessions for each department, specifically on the functions of the intranet. The intranet developer can provide details and a logistical plan for executing the sessions. The number and length of the sessions depends on the level of technological expertise of the staff.

* Continuing education. Technology changes interminably. The structure of a bank's intranet will remain fairly constant until a sanctioned change by the bank and developer, but people will continue to find new uses for it. An intranet grows with the organization, and gradually transforms as new uses for it arise. The bank should keep its training program continually active. Fortunately, users can accomplish much of their continuing education on the intranet itself. It can run training programs online for employees, and confirm the appropriate employees are accessing the programs when required.



SOURCE:
http://www.technologyevaluation.com/research/articles/intranets-a-world-of-possibilities-15902/

Is Your Financial Transaction Secure?

Event Summary

You want to start doing on-line banking but you keep hearing about information security incidents that make you skeptical of the process. How do you know if your financial institution has done due diligence to protect your assets from wily hackers, cavalier administrators, and other information technology perils? If a large sum of money disappeared from your account, and banking records indicated that you made the withdrawal, but you know you didn't, how could you prove this? These are questions that consumers should be asking themselves before jumping on-line to do financial transactions.

The FDIC has been protecting financial accounts since 1933, when it was instituted by Congress in response to the Great Depression. Essentially, the FDIC is a government-managed insurance company. Since the FDIC insures deposits, it makes sense that it is also concerned with financial systems integrity and network security. Traditionally, the FDIC has served as a safety net for bank failures. Since the FDIC began official operations in 1934, at least one bank a year has failed. This year, so far, six banks have failed, according to the FDIC.

Though half a dozen bank closings a year is not impressive, the reason commonly cited for the closings, "inadequate supervision by the bank's board of directors," may concern anyone interested in how banks secure their internal networks. When it comes to system and network security, there are no formal procedures or guidelines for network or information security audits. Banks audit themselves. It is up to the board of directors of each bank to provide the FDIC with an information technology and security audit report. The FDIC then reads the report and assigns a URSIT rating. URSIT stands for "Uniform Rating System for Information Technology."

URSIT ratings run on a scale from 1 to 5, with 1 being the highest rating (least degree of concern) and 5 being the lowest rating (most degree of concern). URSIT ratings are assigned only every other year, and only began being assigned this past April. With technology changing so quickly, and the pace at which financial institutions are jumping on-line, one wonders if once every 24 months is enough. Furthermore, if a bank receives an egregious URSIT rating of 5, which carries the description "Risk management processes are severely deficient... and strategic plans do not exist or are ineffective," wouldn't you want to know this before doing on-line business with them? Unfortunately, URSIT ratings are not available to the general public.

In a letter dated August 24, 1998, to all CEOs and CIOs of national banks, the Office of the Comptroller of the Currency, (the OCC) stipulated that "To manage strategic risk, banks should establish an effective planning process to implement and monitor PC banking systems." This simply means that banks must have a process. What that process involves is very loosely defined. Our understanding is that the majority of banks don't have the expertise to do their own security audits. An assumption is made that if this is the case, the majority of banks outsource network vulnerability assessments. But how can one be sure that their bank is actually outsourcing network vulnerability assessments to reliable security consultants?

As an example, in a recent security audit done for a major bank in the U.K. for a new e-commerce site, the security auditor only scanned TCP ports and failed to scan any of the e-commerce site's UDP ports. What this means is that the security audit as defined by the consultant was only halfway useful, since there are many well-known exploits of UDP ports that hackers can take advantage of that were not taken into consideration. In general, the depth of a security audit varies by consulting firm. Every company defines its own audit procedure, if it has one at all. It is not uncommon for companies to create "procedures" in the midst of a business opportunity.
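The TCP/UDP distinction above is worth making concrete. The sketch below is illustrative only, not part of any bank's actual audit tooling: it shows why the two protocols require different probing techniques, and why a TCP-only scan leaves UDP services unexamined. Host and port values are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.PortUnreachableException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class PortCheck {

    // TCP: a completed connect() proves the port is open; a refused
    // or timed-out connection suggests it is closed or filtered.
    public static boolean isTcpOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    // UDP is connectionless, so "no reply" is ambiguous: an ICMP
    // "port unreachable" usually surfaces as PortUnreachableException
    // (closed), while silence means the port is open or filtered.
    public static String probeUdp(String host, int port, int timeoutMs) {
        try (DatagramSocket s = new DatagramSocket()) {
            s.connect(InetAddress.getByName(host), port);
            s.setSoTimeout(timeoutMs);
            s.send(new DatagramPacket(new byte[1], 1));
            s.receive(new DatagramPacket(new byte[64], 64));
            return "open";
        } catch (PortUnreachableException e) {
            return "closed";
        } catch (SocketTimeoutException e) {
            return "open|filtered";
        } catch (Exception e) {
            return "error";
        }
    }
}
```

The ambiguity of the UDP result is exactly why a thorough audit must treat UDP separately rather than skip it.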

While the FDIC acknowledges the seriousness of the situation, it admits that it is currently too bogged down with Y2K concerns to take any action on system and network security. The FDIC further concedes that after the first of the year, it will step up the staffing devoted to managing system and network security regulations for financial institutions. In the meantime, the FDIC assures people, "All deposits are insured by the FDIC, so the public should not be concerned with URSIT ratings."

Market Impact

For corporations planning on going on-line and signing up with a financial institution's "on-line store service," there is little information that can be gleaned to help understand how safe a financial institution's on-line transaction systems are. With Internet usage expected to exceed 500 million users by the year 2000, and on-line investing accounts expected to triple in the next four years, there is much to be concerned about. For companies selling system and network security technologies, the market is ripe for the picking. There are enough potential customers, and a big enough market, out in the wild, wild west of on-line banking and electronic commerce to keep even the most remedial security consultants working overtime.



SOURCE:
http://www.technologyevaluation.com/research/articles/is-your-financial-transaction-secure-15290/

ATM Machines Hacked in Moscow

Event Summary

According to the Moscow Times, hundreds of ATM PIN codes have been stolen in the last few weeks from Moscow's ATM network. Cybercriminals have used these codes to empty bank accounts down to the last dollar or Deutschemark from other ATMs around the world. Russian and German law enforcement agencies are in the midst of a joint investigation of what is believed to be a single crime ring. Marcel Hoffman, a spokesman for the Federal Association of German Banks, confirmed to the Moscow Times that hundreds of letters of warning had been sent to expatriates alerting them that their ATM PINs had been hacked.

An editorial in the Moscow Times called for the banks to stand up to ATM fraud. Russian bank officials are brushing off the accusations with denials and much verbiage about "first-class security systems." The lack of a concerned response from Russian Banking officials is sure to affect the revenue coming into Moscow.

Methodologies of ATM Hacking

This is not the first case of ATM fraud. In October of '96, a gang of seven businessmen, two from Tel Aviv and five from Poland, were found guilty of withdrawing a total of 600,000 Israeli shekels, equivalent to US$200,000. The businessmen had purchased tens of thousands of blank plastic ATM cards in Greece and later recorded the magnetic codes on the backs of the cards. An Israeli computer expert, Daniel Cohen, had obtained the codes and assisted with the magnetic stripe manufacturing. Magnetic stripe readers and writers can be purchased for about $300.

There are usually two, and sometimes three, tracks on a magnetic stripe, and many fields within each track. Though most banks typically ignore track one, they sometimes put the card holder's name in its fifth field. The account number is usually stored in the second field of track two. The PIN verification field is usually held in field nine of track one or field six of track two. With a magnetic stripe reader, a stolen card's stripe can be read and recorded, and later written onto a new card with a magnetic stripe writer. Alternatively, someone who knows which numbers go in which fields can write another person's account number onto his own card and use his own PIN to loot their account. Encrypted account numbers can be decrypted by savvy cryptographers.
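The field layout described above can be made concrete with a short parser. This is an illustrative sketch following the common ISO 7813 track-2 layout; field numbering varies by standard and issuer, and the sample card data in the usage example is fabricated.

```java
// Illustrative parser for ISO 7813 track-2 data. Track 2 is framed as
// ;PAN=YYMMSSSdddd...? where PAN is the primary account number, YYMM
// the expiry, SSS the service code, and the rest issuer discretionary
// data (where PIN verification values are typically kept).
public class Track2 {
    public final String pan;           // primary account number
    public final String expiry;        // YYMM
    public final String serviceCode;   // three digits
    public final String discretionary; // issuer data

    public Track2(String raw) {
        if (!raw.startsWith(";") || !raw.endsWith("?")) {
            throw new IllegalArgumentException("bad track-2 framing");
        }
        String body = raw.substring(1, raw.length() - 1);
        int sep = body.indexOf('=');   // '=' separates PAN from the rest
        if (sep < 0) {
            throw new IllegalArgumentException("missing field separator");
        }
        pan = body.substring(0, sep);
        expiry = body.substring(sep + 1, sep + 5);
        serviceCode = body.substring(sep + 5, sep + 8);
        discretionary = body.substring(sep + 8);
    }
}
```

For example, `new Track2(";4111111111111111=99121010000012345?")` yields the PAN `4111111111111111`, expiry `9912`, and service code `101`, which is exactly the kind of data a stripe reader recovers from a stolen card.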

There are multiple ways that ATM systems can be compromised. The paper "Why Cryptosystems Fail," by Ross Anderson of the University of Cambridge, describes several of them. Anderson notes that one method for attacking ATM financial networks relies on the fact that many banks do not encrypt or authenticate the authorization response to the ATM. This means that if an attacker finds a way to record a "pay" response from the bank to the machine, a feat that can be accomplished by protocol sniffing on compromised network wires, the attacker can keep replaying the "pay" response until the machine is empty. This technique is known as "jackpotting."

Several years ago, ATM fraud occurred at a bank in New York in which a disgruntled ex-employee stole over $80,000. After shoulder surfing for customer PINs, he used discarded bank receipts to associate the PINs with account numbers, and was later able to encode those account numbers onto cards of his own, presumably using a magnetic stripe writer, and withdraw money.

Some bank ATMs can be hacked by observing a person's PIN, then inserting a phone card. The ATM believes that the previous card has been inserted again, and when the PIN is entered, money is made available for withdrawal.

The fastest growing modus operandi for hacking ATM terminals is to use false decoy terminals to collect customer card and PIN data. Attacks of this kind were first reported in the United States as early as 1988. With a bit of engineering, criminals can build vending machines that accept any card and PIN, and dispense, say, a packet of cigarettes. They put their invention in a shopping mall, and harvest PINs and magnetic stripe data through a modem built into the vending machine.

There have even been cases of people installing second-hand ATMs purchased from banks. These ATMs are installed in public places such as new shopping malls. Unsuspecting consumers insert their cards, punch in their PINs and get a message saying, "Sorry, unable to dispense cash at this time." In the meantime, criminals have used the ATM log files to get a list of card numbers and PIN codes, which they can then use to create bogus cards and withdraw money.

Recommendations

How prevalent is ATM fraud? If we weren't seeing a significant number of reports of it, the FBI wouldn't have so many ATM fraud warnings on its website. Here are some ways that ATM fraud can be reduced:

* ATM fraud is growing. Banks need to be held responsible for any technology risks they put in the hands of consumers.

* As banks become aware of weaknesses in traditional ATM technologies, new security paradigms need to be put into place. Non-reusable authentication systems, such as time-based token authentication systems or non-reusable passwords, would be an improvement over most current ATM systems.

* The US Federal Reserve requires banks to refund all disputed transactions unless they can prove fraud by the customer. If you believe that your account has been victimized by fraudulent activity, report it to your bank at once.

* If traveling abroad, don't use your ATM card. Use old-fashioned, reliable Traveler's Checks.

* When using an ATM card anywhere, do not leave your receipt behind, especially if your bank prints your entire account number on the receipt.

* Putting shredders in ATM booths would be a good preventative for dumpster divers looking for account numbers on discarded receipts.

* Don't put your ATM card in a public vending machine.

* If your ATM card is lost or stolen, report it to your bank ASAP so that it can be deactivated.

* Reconcile your bank account monthly and report any discrepancies immediately.



SOURCE:
http://www.technologyevaluation.com/research/articles/atm-machines-hacked-in-moscow-15164/

The Art Of Distributed Development Of Multi-Lingual Three-Tier Internet Applications

Introduction

In this article we describe the author's experience with the unconventional development of Internet applications. They were developed for a Swiss bank as a joint cooperation between the author, located in Belgrade, Yugoslavia, and a Swiss software development company. The software was developed in a distributed fashion, without any physical access to the production site.

Due to the bank's very strict security rules, previously developed applications used by the newly developed applications were not available for installation at the remote development site. For that reason, simple stubs were developed to emulate the behavior of the previously developed, but unavailable, CORBA (Common Object Request Broker Architecture) and database applications.

In addition, the application had to support multiple spoken languages, so the developed software had to be internally independent of any particular spoken language. In this article we describe a number of useful tips and tricks of the trade that may be helpful to developers facing similar situations. We will describe the three-tier system architecture and the development of the CORBA and database portions of the applications, and present tips on multi-lingual application development.
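In Java, the usual way to keep application code independent of any spoken language is to look up message keys in per-language resource bundles. The article does not show the bank's actual mechanism, so the class, bundle, and key names below are invented for illustration; they sketch the general technique, not the production code.

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// Application logic refers only to message keys; one bundle per
// language supplies the wording. Real projects often use .properties
// files instead of ListResourceBundle classes.
public class I18nDemo {
    public static class Msg_en extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "rates.title", "Exchange rates" } };
        }
    }
    public static class Msg_de extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "rates.title", "Wechselkurse" } };
        }
    }

    // The calling code never embeds language-specific text itself;
    // it asks for a key in the user's locale.
    public static String title(Locale locale) {
        ResourceBundle b = ResourceBundle.getBundle("I18nDemo$Msg", locale);
        return b.getString("rates.title");
    }
}
```

With this arrangement, adding a language means adding one bundle; no servlet code changes.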

System Architecture

Figure 1 depicts the three-tier system architecture typical of Internet applications. Users use web browsers to access various online banking applications via the Internet. Applications are executed by a web server. An example of such an application is the quotation of currency exchange rates. The user selects the desired currencies and a branch of the bank on a query input form and submits the query. The web server accepts the query, processes it, and returns the response to the user's browser. Depending on the particular application, the web server may consult a CORBA application server and/or a database server. The response is returned in the user's language of choice (German, French, Italian, or English).
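The request-processing flow just described can be sketched as plain Java. The production system used Java servlets and live rate sources; the class name, currency pairs, and rate values below are invented stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the quotation step: the web tier receives the selected
// currencies, consults a rate source (stubbed here as a map; the real
// system asked a CORBA server or database), and formats a reply.
public class QuoteHandler {
    private final Map<String, Double> rates = new HashMap<>();

    public QuoteHandler() {
        // Fabricated rates standing in for the CORBA/database lookup.
        rates.put("CHF/USD", 0.62);
        rates.put("CHF/DEM", 1.22);
    }

    public String quote(String from, String to) {
        Double r = rates.get(from + "/" + to);
        if (r == null) {
            return "No quotation available for " + from + "/" + to;
        }
        return "1 " + from + " = " + r + " " + to;
    }
}
```

In the real servlet, the returned string would additionally be localized to the user's chosen language before being sent back to the browser.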

Figure 1. System architecture

Per the bank's internal software development standard, all Internet applications executed by the web server are written using the Java programming language and Java servlets. Although people from the Microsoft camp will most certainly disagree, this is a de facto standard for writing serious Internet applications.

All servers in the production environment run the Sun Microsystems Solaris UNIX operating system. The web server is NES (Netscape Enterprise Server), with the addition of the JRun engine for running Java servlets. The database server is Oracle. The CORBA application server is IONA OrbixWeb. CORBA clients use an internally developed API (Application Programming Interface) and wrapper Java classes built on top of OrbixWeb.

The challenge in this project was to develop software in a distributed fashion, without any physical presence at the production site, while still adhering to the bank's very strict security rules.

First, per the bank's development protocol, software developers do not have direct physical access to the production system. Instead, the developed software is handed over to the production system staff for final testing and installation on the production system.

Second, for security reasons, the already developed software cannot be taken out of the bank's premises for installation on a remote development system. It means that copies of the database and the applications running on the CORBA server were not available at the development site. Instead, stubs had to be developed to emulate the behavior of CORBA and database servers.

The entire software described in this article was developed on a single Windows NT Workstation. The CORBA server was one that comes with the JDK (Java Development Kit). The database was Microsoft Access. The web server was Apache with the JServ engine for running Java servlets. Software was developed using Oracle JDeveloper IDE (Integrated Development Environment). Obviously, the development and the production environments were very different, which was one of the development challenges.

The only contact between the development and production sites was over the phone and via e-mail, so the software was developed solely in a telecommuting fashion. Once the database and CORBA stubs were set up at the development site, and a basic skeleton of the application was set up at the production site, it was easy to gradually build the application at the development site and test it at the production site. Software was shipped via e-mail in the form of compiled JAR (Java Archive) files and static text, HTML (Hyper-Text Markup Language), and graphic files.

Throughout the rest of this article, we will describe some of the tricks of the trade used to overcome the development challenges.

CORBA Implementation

The CORBA standard was developed to standardize the invocation of remote applications across networks, at least in theory. In practice, this is far from reality. In theory, the development of code that invokes remote, already developed applications involves the following steps:

1. Use the remote application's IDL (Interface Definition Language) specification and an IDL compiler to generate API stub code for invoking the remote applications in the desired programming language (Java, C, C++).

2. Develop code for initiating the ORB (Object Request Broker) within the calling application under development.

3. Develop code for invoking remote applications from within the calling application under development.

However, in practice, there are a number of problems:

* CORBA applications developed in different programming languages may have problems talking to each other even when the development tools and the underlying libraries are produced by the same vendor.

* The Java IDL API specification was developed to standardize the API and IDL stubs of all Java applications and thus maximize code portability. However, vendors of CORBA development tools like IONA did not adhere to this standard, so software developed using the JDK, its IDL compiler, and its CORBA name server cannot run against IONA's CORBA name server.

* Even different CORBA development tools of the same vendor, like IONA's OrbixWeb and Orbix 2000, are not mutually compatible, and require different application code.

Fortunately, the differences and incompatibilities in the application code apply mostly to a relatively small portion of the code: the ORB initiation. For that reason, it was possible to develop code in the JDK CORBA environment and port it to the OrbixWeb production environment as follows:

1. Use the JDK IDL compiler to compile the IDL specification and generate Java stubs for the development environment.

2. Develop and test the application using the ORB initiation code appropriate for the JDK environment.

3. Use the OrbixWeb IDL compiler to compile the IDL specification and generate Java stubs for the OrbixWeb production environment.

4. Replace the ORB initiation code with the code needed for the OrbixWeb production environment.

Once this porting procedure is established, delivery of code modifications is very efficient with a little help from code-building scripts.

The other problem with the development of the CORBA code was the unavailability of the original CORBA application server. This problem was solved by developing a simple stub application server that emulates the responses of the real one. The stub server loads test data from a text file and, upon request, passes it to the CORBA client, i.e., to the requesting servlet in this case.
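The data-loading half of such a stub server can be sketched in a few lines. The class name and one-record-per-line file format below are invented for illustration; in the actual project the loaded records would be returned through the stub's CORBA servant methods rather than a plain Java call.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Emulates the real application server by serving canned records from a text file. */
public class StubDataSource {
    private final List<String> records = new ArrayList<>();

    public StubDataSource(String testDataFile) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(testDataFile))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (!line.isEmpty()) {
                    records.add(line);  // one canned record per line
                }
            }
        }
    }

    /** Returns the canned records; the CORBA servant would pass these to the client. */
    public List<String> fetchRecords() {
        return new ArrayList<>(records);
    }
}
```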

Database Implementation

Portability of JDBC (Java Database Connectivity) code is significantly better than that of CORBA-related code. As long as database operations are restricted to standard SQL (Structured Query Language) and avoid triggers and stored procedures, the developed Java code runs on virtually any database. Porting to a different database type is performed by simply specifying a different database source and driver in a textual database configuration file such as:

JDBCDriver = sun.jdbc.odbc.JdbcOdbcDriver
JDBCConnectionURL = jdbc:odbc:DbSource

The above two lines specify a database source named DbSource, registered in the Windows ODBC (Open Database Connectivity) manager, and Sun Microsystems' JDBC-ODBC bridge driver. By modifying these two lines in the configuration file, one can switch from, for example, a Microsoft Access database at the development site to an Oracle database at the production site. In fact, this approach is so convenient that the author has used it in many other Java projects involving databases. Microsoft Access allows quick prototyping and modification; once the database design is finalized, the database can be ported to Oracle using an Oracle porting tool. In addition, this approach allows the use of a laptop computer to demonstrate work in progress at a customer's site.
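The configuration-driven switch amounts to only a few lines of JDBC code. The sketch below (the class and method names are reconstructions, not the project's actual code) reads the two properties and opens a connection accordingly:

```java
import java.io.IOException;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

/** Opens a JDBC connection using a driver and URL read from a configuration file. */
public class DbConnector {
    /** Reads JDBCDriver and JDBCConnectionURL from a properties-style config file. */
    public static Properties loadConfig(Reader config) throws IOException {
        Properties props = new Properties();
        props.load(config);
        return props;
    }

    /** Registers the configured driver class and connects to the configured database. */
    public static Connection connect(Properties props)
            throws ClassNotFoundException, SQLException {
        Class.forName(props.getProperty("JDBCDriver"));  // e.g. the JDBC-ODBC bridge
        return DriverManager.getConnection(props.getProperty("JDBCConnectionURL"));
    }
}
```

Switching databases then requires no recompilation at all, only an edit to the configuration file.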

This approach was used in the project described in this article to quickly create a database stub that emulates the behavior of the database at the production site. Since the application used only a small subset of the tables and fields in the actual database, replicating their structure at the development site was quick and easy.

When it comes to database Internet applications, another trick worth mentioning is the use of database connection pools. A typical servlet-based Internet application that uses a database involves three steps when a servlet is invoked: connecting to the database, accessing data, and disconnecting from the database. Connecting to a database is a time-consuming operation. For that reason, pools of pre-established database connections are maintained. Each servlet maintains a connection pool consisting of a configurable number of pre-established database connections. Instead of waiting for a connection to be established, a database request takes an already-established connection from the pool, uses it, and later returns it to the pool for reuse. The use of connection pools significantly improves the application's performance. Oracle's JDeveloper IDE comes with a library that implements a connection pool manager; however, the author used one of the many connection pool implementations available for download from the Internet.
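The pattern behind those downloadable implementations can be sketched in a few lines. The generic version below is a simplification invented for illustration, not any particular library's API: it pre-establishes a fixed number of connections up front and hands them out on request.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

/** A minimal fixed-size pool: connections are established once and then reused. */
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());  // pay the connection cost once, up front
        }
    }

    /** Takes an already-established connection; blocks if all are in use. */
    public T borrow() throws InterruptedException {
        return idle.take();
    }

    /** Returns a connection to the pool for reuse by later requests. */
    public void release(T connection) {
        idle.offer(connection);
    }
}
```

In the servlet setting, `T` would be `java.sql.Connection` and the supplier would call `DriverManager.getConnection`; each request would borrow a connection in a try/finally block and release it when done, so a failed request cannot leak a pooled connection.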



SOURCE:
http://www.technologyevaluation.com/research/articles/the-art-of-distributed-development-of-multi-lingual-three-tier-internet-applications-16870/