It may surprise you to learn that the practical pursuit of real-life artificial intelligence is not just a recent development. In fact, in 1958, Professor Frank Rosenblatt first postulated the concept of the perceptron—a device built to mimic the biological networked structure found in living things, and similarly able to “learn” as creatures do. Prof. Rosenblatt described his algorithm as “the first machine which is capable of having an original idea” and developed its practical application on the IBM 704—a 36-bit, vacuum-tube computer weighing 10 to 15 tons and costing two million dollars at the time (equivalent to almost $22 million today).
Flash ahead 66 years, and we see how machine learning has built on Prof. Rosenblatt’s idea: faster, more complex and more affordable hardware and software unlock AI potential that he could never have imagined. Yet today’s AI models are still built from perceptron-like units trained on massive datasets, aided by parallel computing that lets multiple machines and processors work together on a single task. By dividing that task into smaller sub-tasks, distributed parallel computing can greatly reduce the time required for AI training.
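Rosenblatt’s learning rule is simple enough to sketch in a few lines. The example below (illustrative, not from the article) trains a single perceptron on the logical AND function—a linearly separable task the original algorithm can solve—by nudging weights toward the correct answer after each mistake.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule
# (illustrative names and parameters, not from the article).
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias (threshold offset)
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # Step activation: "fire" if the weighted sum crosses the threshold
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = label - prediction
            # Rosenblatt's update: shift weights in the direction of the input
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Learn logical AND: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Modern networks stack millions of such units and train them across many machines at once, but the core idea—adjusting weights from errors on data—is the same.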
As demand for both generalized and purpose-built AI models rises, so too does the need for these distributed computing assets to communicate quickly and reliably—and that means fiber-optic network infrastructure.
Leveraging underutilized distributed assets
During slow or off-peak hours, data centers can support these AI training workloads by dedicating excess capacity to their distributed networks. These assets can be hundreds or thousands of miles apart, so harnessing their spare cycles requires high-count fiber-optic cabling.
Today’s cables can contain as many as 6,912 fibers, amounting to more than a mile of fiber in each foot of cable. Each of those fibers can transmit billions of bits of data per second. With the right infrastructure, the practical limits to distance in distributed computing become less and less important.
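The figures above check out with simple arithmetic. The sketch below verifies the mile-per-foot claim and adds a hypothetical aggregate-capacity estimate (the 100 Gbps per-fiber rate is an assumption for illustration, not a figure from the article).

```python
# Back-of-envelope check of the cable figures in the text.
FIBERS_PER_CABLE = 6912
FEET_PER_MILE = 5280

# Each fiber runs the full length, so one foot of cable
# contains 6,912 feet of fiber.
miles_of_fiber_per_cable_foot = FIBERS_PER_CABLE / FEET_PER_MILE
assert miles_of_fiber_per_cable_foot > 1  # "more than a mile per foot"

# Aggregate capacity, assuming (hypothetically) 100 Gbps per fiber
GBPS_PER_FIBER = 100
total_tbps = FIBERS_PER_CABLE * GBPS_PER_FIBER / 1000
print(f"{miles_of_fiber_per_cable_foot:.2f} miles of fiber per cable-foot, "
      f"~{total_tbps:.0f} Tbps aggregate")
```

At that assumed rate, a single fully lit cable would carry on the order of hundreds of terabits per second—which is why distance matters less and less in distributed computing.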
The closure that opens up potential
CommScope has introduced a new FOSC® solution to help enable more reliable and flexible distributed networks like these. The FOSC-650 is designed to support splicing capacity of up to 6,912 fibers within its robust, protective enclosure. Installing such high-count fiber infrastructure is an expensive and delicate proposition due to the precise splicing required, and crowded closures can make it much more difficult. That’s why the FOSC-650 is built to accommodate up to 12 cables with diameters up to 1.6 inches each, making access easier for installation and maintenance. Its wide trays provide ample space for spliced fibers and efficient slack management—and are even backward-compatible with FOSC-C and FOSC-D trays.
Protecting these connections is equally vital. Capitalizing on the well-established reliability of our FOSC-450 and FOSC-600 series, the FOSC-650 uses our tested and trusted gel sealing technology, which secures with four latches and eliminates the need to apply grease or lubricant when reopening and reclosing the closure.
By enabling simpler, quicker and more reliable deployments of the highest-count fiber infrastructure, the FOSC-650 supports the networks that drive advanced AI learning—and helps fulfill Prof. Rosenblatt’s groundbreaking vision of the perceptron and the practical potential of machine intelligence.
To learn more about CommScope’s FOSC-650 solution, check out the specifications and ordering guide. You can also see how easily it installs in this brief video.
© 2024 CommScope, LLC. All rights reserved. CommScope and the CommScope logo are registered trademarks of CommScope and/or its affiliates in the U.S. and other countries. For additional trademark information see https://www.commscope.com/trademarks. All product names, trademarks and registered trademarks are property of their respective owners.