
What Is Star Topology? Advantages And Disadvantages Of A Star Topology

Short Bytes: Star topology is the most commonly used network topology. Here is how it works, along with its main advantages and disadvantages.


What is Star Topology?

A star topology is a network topology in which all the network nodes are individually connected to a central switch, hub or computer which acts as a central point of communication to pass on the messages.

In a star topology, there are different nodes called hosts and there is a central point of communication called server or hub. Each host or computer is individually connected to the central hub. We can also term the server as the root and peripheral hosts as the leaves.

In this topology, if the nodes want to communicate with one another, they pass their messages to the central server, and the central server forwards those messages to the intended nodes. Thus, they form a layout that looks like the representation of a star.

How does communication happen in a Star topology?

Let’s say all the computers on a floor are connected to a common hub or switch. In this case, the switch maintains a CAM (Content Addressable Memory) table, in which the hardware addresses of all the connected devices are stored in memory on the switch.

For example, if computer A wants to send a data packet to computer B, computer A forwards the frame to the switch. The switch looks up the destination computer's address in its CAM table and forwards the frame to it.

In the case of a hub, the hub has no memory of its own. So when computer A sends a message for computer B, the hub simply repeats the frame out of every port, effectively announcing, "I have a packet for this address, which of you owns it?" Every connected machine receives the frame, but only the one with the matching hardware address accepts it. (Working out which hardware address belongs to which IP address in the first place is done by the hosts themselves using ARP, the Address Resolution Protocol.)
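
To make the forwarding difference concrete, here is a minimal Python sketch (an illustration only, not real switch firmware): the switch keeps a CAM table mapping hardware addresses to ports and sends a frame only where it needs to go, while the hub simply repeats every frame to every port.

# Minimal sketch (illustrative only): a switch's CAM-table lookup versus
# a hub's broadcast behaviour in a star topology.

class Switch:
    def __init__(self):
        self.cam_table = {}  # hardware (MAC) address -> port number

    def learn(self, mac, port):
        """Remember which port a source address was seen on."""
        self.cam_table[mac] = port

    def forward(self, frame):
        """Send the frame only to the destination's port, if it is known."""
        port = self.cam_table.get(frame["dst"])
        if port is not None:
            return [port]                          # unicast to a single port
        return list(self.cam_table.values())       # unknown destination: flood (simplified)

class Hub:
    def forward(self, frame, all_ports):
        """A hub has no memory: it repeats the frame to every port."""
        return all_ports

# Computer A (port 1) sends a frame to computer B (port 2).
switch = Switch()
switch.learn("AA:AA:AA:AA:AA:AA", 1)
switch.learn("BB:BB:BB:BB:BB:BB", 2)
print(switch.forward({"src": "AA:AA:AA:AA:AA:AA", "dst": "BB:BB:BB:BB:BB:BB"}))  # [2]
print(Hub().forward({"dst": "BB:BB:BB:BB:BB:BB"}, all_ports=[1, 2, 3, 4]))        # [1, 2, 3, 4]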

Advantages of Star Topology:

  • A single computer failure causes limited damage, as it does not bring down the entire network

Disadvantages of Star topology:

  • More cables are required, because each computer is individually connected to the central hub or server
  • The central hub or server is a single point of failure: if it goes down, the whole network goes down

Kingston’s ‘Unhackable’ DataTraveler USB Drive Self-destructs With Incorrect PIN Entry


Short Bytes: Kingston Digital, one of the world leaders in memory products, has released the DataTraveler 2000 encrypted USB Flash drive. This portable memory device offers best-in-class security features like hardware encryption and PIN protection with an onboard keypad. The device is expected to ship in Q1 2016 in 16GB, 32GB and 64GB capacities.

At CES 2016, Kingston announced a new USB drive that will make life easier for privacy-conscious users. The secure DataTraveler 2000 encrypted USB Flash drive is designed to give IT professionals the best possible security for carrying sensitive documents.

The USB drive looks impressive right from the outside. As you pull off the outer aluminum cover, a built-in keypad is there to surprise you. When the drive is inserted into a computer, you’ll have to unlock it by entering the correct PIN. If you fail to do so within 10 attempts, the drive will self-destruct — sounds just like the pen drives from Hollywood flicks like Mission Impossible, right?

This USB 3.1 compatible thumb drive offers speeds of up to 135MBps read and 40MBps write. On the security front, the DataTraveler 2000 comes with hardware-based full disk AES 256-bit encryption in XTS mode. The drive also protects your data from brute-force attacks.

Kingston DataTraveler 2000 USB — PIN protection, AES 256-bit data encryption, resists bruteforce attacks

For additional protection, Kingston’s super-secure USB drive features the option of auto-locking the drive by deleting key and password files after 10 invalid login attempts.
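
That lockout behaviour is easy to picture as a failed-attempt counter that destroys the key material. The Python toy below is purely illustrative; the real DataTraveler 2000 implements this in hardware, and the class and PIN here are made up for the example.

# Toy illustration only (not Kingston firmware): the general
# "wipe after N failed PIN attempts" lockout pattern described above.
import secrets

MAX_ATTEMPTS = 10

class SecureDrive:
    def __init__(self, pin):
        self._pin = pin
        self._key = secrets.token_bytes(32)   # stand-in for the AES-256 data key
        self._failed = 0

    def unlock(self, pin_entry):
        if self._key is None:
            return "drive wiped: data unrecoverable"
        if pin_entry == self._pin:
            self._failed = 0
            return "unlocked"
        self._failed += 1
        if self._failed >= MAX_ATTEMPTS:
            self._key = None                  # "self-destruct": key material destroyed
            return "drive wiped: data unrecoverable"
        return f"wrong PIN ({MAX_ATTEMPTS - self._failed} attempts left)"

drive = SecureDrive(pin="1234")
for guess in ["0000"] * 10:                   # ten bad guesses trigger the wipe
    print(drive.unlock(guess))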

“We are excited to add DataTraveler 2000 to our existing lineup of fast and encrypted USB Flash drives for organizations and SMBs,” said Ken Campbell, Flash business manager, Kingston. “It is the perfect option to deploy in the workforce where a uniform encrypted data storage solution that works on many different OS’ are in use.”

This OS independent USB drive works with all popular operating systems, even Android and ChromeOS. The DataTraveler 2000 is available in 16GB, 32GB and 64GB capacities.

The DataTraveler 2000 is expected to hit the market at the end of Q1 2016.

Are you excited about this upcoming USB drive from Kingston? Tell us in the comments below.

Top 10 Best Free Data Recovery Software of 2016

Short Bytes: FossBytes brings you a list of the best free data recovery software of 2016. These data recovery tools save a lot of hassle when we accidentally delete important files or format a hard drive without taking a backup. Using this free recovery software, you can get your lost data back on your PC.

We often lose important data from a hard disk by accidentally pressing the Delete key. Sometimes, a software bug or virus can also corrupt your hard disk. In that case, you need good data recovery software to get your important data back at any cost.

This is where data recovery software comes in handy. We have compiled this list of the best free data recovery software considering factors such as whether the software can recover RAW, unallocated, corrupt or formatted hard disks; its ability to recover from different file systems such as FAT, FAT32, HFS and NTFS; the range of devices supported; the time taken for file recovery; and user friendliness, to name a few. Here is the list:

Top 10 Best Data Recovery Software 2016 for free:

1. Recuva:


The fact that Recuva is at the top of this list of the best data recovery software may not come as a surprise to most of you. Some of the features that put Recuva at the top of the list are:

  • Superior file recovery
  • Advanced deep scan mode
  • Secure overwrite feature that uses industry- and military-standard deletion techniques (sketched after this list)
  • Ability to recover files from damaged or newly formatted drives
  • Easy user interface
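
As an aside, the secure overwrite idea from the list above boils down to replacing a file's bytes before removing it, so ordinary undelete tools cannot get the old contents back. Below is a hedged, single-pass Python sketch of that idea; it is not Recuva's actual implementation, and real tools typically use multi-pass standards (and overwriting may be ineffective on SSDs).

# Simplified sketch of a "secure overwrite": fill the file with random bytes,
# flush it to disk, then delete it. Not Recuva's implementation.
import os

def secure_delete(path, passes=1):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))     # overwrite the contents in place
            f.flush()
            os.fsync(f.fileno())          # make sure the new bytes hit the disk
    os.remove(path)                       # finally drop the directory entry

# Usage: create a scratch file, then securely delete it.
with open("scratch.tmp", "wb") as f:
    f.write(b"sensitive data")
secure_delete("scratch.tmp", passes=3)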

2. TestDisk:


A list of the best data recovery software can hardly be called complete without a mention of TestDisk. Packed with features and a file recovery system that can easily overshadow that of any other data recovery software, TestDisk has a lot to offer both novices and experts. Here are some of TestDisk's features:

  • Allows users to recover/rebuild the boot sector
  • Fixes or recovers a deleted partition table, and can reliably undelete files from FAT, exFAT, NTFS and ext2 file systems
  • Available on all major platforms, such as Microsoft Windows, Mac OS X and Linux, and is in fact quite popular, as it can be found on various Linux Live CDs

Being a command-line tool, however, TestDisk may not be suitable for users who prefer a graphical interface for data recovery.

3. Undelete 360:

With the looks of a typical Office application, Undelete 360 is built on a fast yet efficient algorithm that enables the user to undelete files. Here are some of the features of Undelete 360:

  • Works on a variety of devices, such as digital cameras, USB drives, etc.
  • It includes a data-wiping tool and a hex viewer, along with the ability to preview files before recovery
  • Does a great job of recovering recently deleted files compared with other free data recovery software
  • Also able to recover files of a wide variety of types, such as DOC, HTML, AVI, MP3, JPEG, JPG, PNG, GIF, etc.

However, its scanning speed needs major improvement, and it also lags behind the competition in terms of the amount of data it recovers.

4. PhotoRec:


Definitely one of the best data recovery tools out there, PhotoRec is widely acclaimed for its powerful file recovery across a wide variety of devices, ranging from digital cameras to hard disks. Here are some of the features of the PhotoRec recovery tool:

  • Compatible with almost all major platforms, such as Microsoft Windows, Linux and Mac OS X
  • Comes packed with the ability to recover more than 440 different file formats
  • Features such as the 'unformat function' and the ability to add your own custom file types come in handy

That said, I wouldn't recommend this free data recovery software to beginners, as it is completely devoid of a GUI and its command-line interface may intimidate some users.

5. Pandora Recovery:


Pandora Recovery is one of the most reliable and effective free data recovery tools out there. It has a lot to offer its users. Here are some of the features of this tool:

  • Ability to recover deleted files from NTFS and FAT-formatted volumes
  • Preview deleted files of certain types (image and text files) without performing recovery
  • Surface scan (which lets you recover data from drives that have been formatted) and the ability to recover archived, hidden, encrypted and compressed files mean it packs quite a punch
  • Its interface is very easy to get the hang of and provides an explorer-like view along with colour-coded recovery-percentage indicators

However, its file detection system is not that reliable and needs further improvement. The software could also be made portable, so that installing it doesn't consume hard disk space and thereby risk overwriting the very space a file you wish to recover once occupied.

6. MiniTool Partition Recovery:

Standard undelete programs like Recuva and Pandora are perfect for recovering a few deleted files, but what if you have lost an entire partition? Then you will probably need a specialist application like MiniTool Partition Recovery. Here are some of the great features of this recovery tool, which specialises in partition recovery:

  • An easy wizard-based interface
  • Specialized in data recovery on an entire partition
  • Point MiniTool Partition Recovery at the problematic drive and it will scan for the missing partition
  • Generates a recovery report that lets you know what the program has found, to help you with data recovery
  • However, it cannot be run from a bootable disc

7. Wise Data Recovery:

The Wise Data Recovery tool is one of the fastest undelete tools among the best data recovery software. Besides being fast, it also comes with some nice features. Here is a list of them:

  • An easy and intuitive interface
  • Can recover deleted files from local drives, USB drives, cameras, memory cards, removable media devices, etc.
  • A faster search filter that lets you narrow results by built-in file-extension groups based on file type
  • Compatible with Windows XP through Windows 8

Although the scanning is fast, the program has no deep scan mode, which could mean a slightly reduced chance of recovering the hardest-to-recover files.

8. Puran File Recovery:

Puran file recovery works in 3 main recovery modes. These recovery modes are:

  • Default Quick Scan (It simply reads the FAT or NTFS file system for deleted files from the recycle bin etc.)
  • Deep Scan (includes scanning all available free space) and,
  • Full Scan (checks all space on the device for the best chance of recovery)
  • Works from Windows XP to Windows 8

Using the "Find lost files" option turns Puran File Recovery into a tool that recovers all files from a lost or damaged partition. You can also edit the custom scan list, which stores file signatures for more accurate recovery of badly damaged data; a rough sketch of how that signature-based scanning works follows.
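
Signature-based recovery (often called file carving) works by scanning raw bytes for the magic numbers that mark the start of known file types. The Python sketch below is a rough illustration of that idea; the disk-image path and the tiny signature table are assumptions for the example, not Puran's internal scan list.

# Rough sketch of signature-based recovery ("file carving"): scan a raw disk
# image for known file headers. Illustrative only.
SIGNATURES = {
    b"\xff\xd8\xff": "jpg",          # JPEG start-of-image marker
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip/docx",
}

def carve(image_path, chunk_size=1024 * 1024):
    hits = []
    offset = 0
    with open(image_path, "rb") as img:
        while chunk := img.read(chunk_size):
            for magic, ftype in SIGNATURES.items():
                pos = chunk.find(magic)
                if pos != -1:
                    hits.append((offset + pos, ftype))
            offset += len(chunk)
    return hits

# Usage: point it at a raw dump of the drive (e.g. one made with dd).
# for start, ftype in carve("disk.img"):
#     print(f"possible {ftype} file at byte offset {start}")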

9. PC Inspector File Recovery

PC Inspector File Recovery works well on both FAT and NTFS drives, even if the boot sector has been erased or damaged. Here are some of the features of this recovery tool:

  • Simple search dialog to help locate files by name
  • Recovered files can be restored to a local hard disk or to network drives
  • Can recover files in many different formats, such as ARJ, AVI, BMP, DOC, DXF, XLS, EXE, GIF, HLP, HTML, JPG, LZH, MID, MOV, MP3, PDF, PNG, RTF, TAR, TIF, WAV and ZIP
  • Can scan just specific areas of the disk with the cluster scanner
  • Works well from Windows XP to Windows 7

However, the interface is a somewhat confusing mess of tabs, so be careful with this tool.

10. Restoration

The Restoration data recovery program takes the final position in this list of the top 10 best data recovery tools. It is similar to the other free undelete apps on this list. Even though it sits in tenth position, here are a few things we liked about this data recovery tool:

  • Very simple and easy to use
  • No confusing or cryptic buttons, and no complicated file recovery procedures
  • It can recover data and files from hard drives, memory cards, USB drives, and other external drives as well
  • Does not need to be installed and can run from a floppy disk or USB drive
  • Supports Windows Vista, XP, 2000, NT, ME, 98 and 95, and has also been successfully tested on Windows 7 and Windows 10
  • Sometimes runs into problems with Windows 8

 

I would personally recommend Piriform's Recuva to all our readers, hands down. With superior file recovery, an advanced deep scan mode, a secure overwrite feature that uses industry- and military-standard deletion techniques, and the ability to recover files from damaged or newly formatted drives, Recuva is undeniably one of the best free data recovery tools out there. Its portability (the ability to run without installation) is one feature that sets it apart from the others.

The user interface won't let you down either, with a file-recovery wizard and a manual mode at your disposal, complete with colour coding (indicating the probability of recovering a file) and the ability to preview files before undeleting them. Recuva is definitely a notch above the others and undoubtedly the most complete and reliable free data recovery software available today.

Your WiFi Router Has a Superpower You Didn’t Know

Do you know that your WiFi router has a secret magical power? Let’s tell you about it in detail.

Electrical engineers from the University of California have found a way to estimate the number of people in a room using only WiFi power measurements. This is reminiscent of the mobile-phone spying technology used by Batman and Fox in The Dark Knight. "Our approach can estimate the number of people walking in an area, based on only the received power measurements of a WiFi link," said Mostofi, a professor at the University of California. With this approach, the people being counted don't need to carry any WiFi-enabled device.

To count the number of people, the researchers placed two WiFi nodes on opposite sides of a 70-square-meter area. By analysing the power measurements of the link between those nodes, the number of people in the area was accurately estimated, up to nine people, in both indoor and outdoor locations. When people crossed the path between the two nodes, the WiFi signal dropped a bit, and it recovered once the path was clear.

By examining the variation and behavior of the signals, the researchers were able to derive a mathematical method to estimate the number of people in the area.
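
As a rough illustration of that idea (and emphatically not the researchers' actual model), the Python toy below simulates a received-power trace that fluctuates more as more people cross the link, then maps the measured spread back to a crowd-size estimate using a simple calibration.

# Toy illustration only: more people crossing the WiFi link means larger
# fluctuations in received power, so the spread of the trace hints at the count.
import random
import statistics

def simulate_rssi(n_people, samples=1000, base_dbm=-40.0):
    """Synthetic received-power trace: each person occasionally blocks the link."""
    trace = []
    for _ in range(samples):
        blocked = sum(random.random() < 0.03 for _ in range(n_people))
        trace.append(base_dbm - 4.0 * blocked + random.gauss(0, 0.3))
    return trace

def estimate_people(trace, calibration):
    """Pick the candidate count whose expected spread is closest to the measured one."""
    spread = statistics.pstdev(trace)
    return min(calibration, key=lambda n: abs(calibration[n] - spread))

# "Calibrate" against known crowd sizes, then estimate a fresh trace.
calibration = {n: statistics.pstdev(simulate_rssi(n)) for n in range(10)}
print(estimate_people(simulate_rssi(6), calibration))   # typically prints 6 or a neighbouring value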

Mostofi said, “This is about counting walking people, which is very challenging, counting this many people in such a small area with only WiFi power measurements of one link is a hard problem, and the main motivation for this work.”


This finding could be used in various applications, such as estimating occupancy for smart buildings or sizing air-conditioning capacity. Apart from these, it could also be used in security-related areas.

How To Set Up A VPN In Windows 10: The Ultimate Guide

Short Bytes: How do you set up a VPN in Windows 10? Many Windows 10 users ask this question, because Windows 10 has a different network settings interface than previous Windows versions, and setting up a VPN involves slightly different steps. Follow our guide to learn how to set up a VPN connection in Windows 10.

A virtual private network (VPN) extends a private network across a public network, such as the Internet, and lets you securely reach servers in different countries. Here are the steps to set up a VPN in Windows 10:

How to set up a VPN in Windows 10:

Before following this procedure, log in to Windows 10 with administrative privileges, and then follow the steps below:

  • Open Settings of your Windows 10 computer to get started with setting up a VPN in Windows 10.


  • Click on the "Network and Internet" icon to open the relevant settings.


  • On the left panel, click on VPN and the VPN setup window will open.


  • Click on “Add a VPN connection” and a new window will open up to set up the VPN in Windows 10.


  • Fill in the following details under the "Add a VPN connection" window:
    • Select Windows (built-in) under VPN provider
    • Give the connection a name of your choice
    • Enter the server name or address
    • Under the VPN type, select "Point to Point Tunneling Protocol (PPTP)"
    • Under "Type of sign-in info", select one of the options of your choice
    • Enter the username and password, if necessary
    • Check "Remember my sign-in info" at the bottom to avoid having to log in again and again in future
    • Finally, click Save
  • Now you will see the newly added VPN connection under the VPN window


  • Click on the newly added VPN connection and then click "Connect"; that will connect you to your server
  • If you want to edit the information for your newly added VPN, click on Advanced options just beside "Connect"
  • Advanced options will show you the connection properties of the newly added VPN. Click on Edit to change the VPN information.


 

You can also click on "Clear sign-in info", just below the "Edit" option in the picture above, to clear the saved password, username or OTP for your VPN connection on Windows 10. (A scripted alternative is sketched below.)
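
The scripted alternative mentioned above: the same profile the GUI creates can be added with Windows' built-in Add-VpnConnection PowerShell cmdlet and then dialled with rasdial, called here from Python. This is a hedged sketch; the connection name and server address are placeholders, and PPTP is used only to mirror the steps above (it is an older, weaker protocol).

# Hedged sketch: create and connect the VPN profile from a script instead of
# the Settings app. Run on Windows with appropriate privileges.
import subprocess

NAME = "MyVPN"                  # placeholder connection name
SERVER = "vpn.example.com"      # placeholder server address

# Equivalent of "Add a VPN connection" in Settings.
create_cmd = (
    f'Add-VpnConnection -Name "{NAME}" -ServerAddress "{SERVER}" '
    "-TunnelType Pptp -RememberCredential"
)
subprocess.run(["powershell", "-Command", create_cmd], check=True)

# Equivalent of clicking "Connect" (credentials can be appended if required).
subprocess.run(["rasdial", NAME], check=True)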

I hope you found our article "How to set up a VPN in Windows 10" useful. If you know other methods to set up a VPN in Windows 10, or any other tricks, let us know in the comments below.

 

 

10 most powerful supercomputers in the world

Supercomputing is the future of cloud infrastructure. The demand for everyday computing has skyrocketed in the last few years thanks to the explosion of big data, and ever more processing power is needed to keep up. So-called supercomputers have been developed to meet those processing needs.

Here is the list of the top 10 supercomputers in the world.

Tianhe-2


It is the world's fastest supercomputer according to the TOP500 lists for June 2013, November 2013, June 2014, November 2014, June 2015, and November 2015. Plans by Sun Yat-sen University, in collaboration with the Guangzhou district and city administration, to double its computing capacity were stopped when the US government rejected Intel's application for an export license for the CPUs and coprocessor boards. Wall Street Journal analysts considered this a blow to Intel's and its suppliers' sales and a drag on US information technology development, but concurrently a boost for China's own processor development and production industry.

Specifications of Tianhe-2

According to NUDT, Tianhe-2 would have been used for simulation, analysis, and government security applications.

With 16,000 computer nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips, it represented the world’s largest installation of Ivy Bridge and Xeon Phi chips, counting a total of 3,120,000 cores. Each of the 16,000 nodes possessed 88 gigabytes of memory (64 used by the Ivy Bridge processors, and 8 gigabytes for each of the Xeon Phi processors). The total CPU plus coprocessor memory was 1,375 TiB (approximately 1.34 PiB).

During the testing phase, Tianhe-2 was laid out in a non-optimal confined space. When assembled at its final location, the system will have had a theoretical peak performance of 54.9 petaflops. At peak power consumption, the system itself would have drawn 17.6 megawatts of power. Including external cooling, the system drew an aggregate of 24 megawatts. The completed computer complex would have occupied 720 square meters of space.

The front-end system consisted of 4096 Galaxy FT-1500 CPUs, a SPARC derivative designed and built by NUDT. Each FT-1500 has 16 cores and a 1.8 GHz clock frequency. The chip has a performance of 144 gigaflops and runs on 65 watts. The interconnect, called the TH Express-2, designed by NUDT, utilized a fat tree topology with 13 switches each of 576 ports.

Tianhe-2 ran on Kylin Linux, a version of the operating system developed by NUDT. Resource management is based on Slurm Workload Manager.

Titan


Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that uses graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan is the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012 and it became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.

Titan is due to be eclipsed at Oak Ridge by Summit in 2018, which is being built by IBM and features fewer nodes with much greater GPU capability per node as well as local per-node non-volatile caching of file data from the system’s parallel file system.

Titan employs AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order of magnitude increase in computational power over Jaguar. It uses 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaFLOPS; in the LINPACK benchmark used to rank supercomputers’ speed, it performed at 17.59 petaFLOPS. This was enough to take first place in the November 2012 list by the TOP500 organization, but Tianhe-2 overtook it on the June 2013 list.

Titan is available for any scientific purpose; access depends on the importance of the project and its potential to exploit the hybrid architecture. Any selected code must also be executable on other supercomputers to avoid sole dependence on Titan. Six vanguard codes were the first selected. They dealt mostly with molecular scale physics or climate models, while 25 others queued behind them. The inclusion of GPUs compelled authors to alter their codes. The modifications typically increased the degree of parallelism, given that GPUs offer many more simultaneous threads than CPUs. The changes often yield greater performance even on CPU-only machines.

Specifications

Titan uses Jaguar's 200 cabinets, covering 404 square meters (4,352 sq ft), with replaced internals and upgraded networking. Reusing Jaguar's power and cooling systems saved approximately $20 million. Power is provided to each cabinet at three-phase 480 V. This requires thinner cables than the US standard 208 V, saving $1 million in copper. At its peak, Titan draws 8.2 MW, 1.2 MW more than Jaguar, but runs almost ten times as fast in terms of floating point calculations. In the event of a power failure, carbon fiber flywheel power storage can keep the networking and storage infrastructure running for up to 16 seconds. After 2 seconds without power, diesel generators fire up, taking approximately 7 seconds to reach full power. They can provide power indefinitely. The generators are designed only to keep the networking and storage components powered so that a reboot is much quicker; the generators are not capable of powering the processing infrastructure.

Titan has 18,688 nodes (4 nodes per blade, 24 blades per cabinet), each containing a 16-core AMD Opteron 6274 CPU with 32 GB of DDR3 ECC memory and an Nvidia Tesla K20X GPU with 6 GB GDDR5 ECC memory. There are a total of 299,008 processor cores, and a total of 693.6 TiB of CPU and GPU RAM.
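
Those headline totals follow directly from the per-node figures; here is a quick Python check that uses only the numbers quoted in this section (memory treated as binary GiB).

# Quick arithmetic check of Titan's quoted totals.
nodes = 18_688
cores_per_node = 16               # one 16-core Opteron 6274 per node
mem_per_node_gib = 32 + 6         # 32 GB DDR3 (CPU) + 6 GB GDDR5 (GPU)

print(nodes * cores_per_node)             # 299008 processor cores, as stated
print(nodes * mem_per_node_gib / 1024)    # ~693.5 TiB of combined CPU and GPU RAM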

Initially, Titan used Jaguar's 10 PB of Lustre storage with a transfer speed of 240 GB/s, but in April 2013, the storage was upgraded to 40 PB with a transfer rate of 1.4 TB/s. GPUs were selected for their vastly higher parallel processing efficiency over CPUs. Although the GPUs have a slower clock speed than the CPUs, each GPU contains 2,688 CUDA cores at 732 MHz, resulting in a faster overall system. Consequently, the CPUs' cores are used to allocate tasks to the GPUs rather than directly processing the data as in conventional supercomputers.

Titan runs the Cray Linux Environment, a full version of Linux on the login nodes that users directly access, but a smaller, more efficient version on the compute nodes.

Titan’s components are air-cooled by heat sinks, but the air is chilled before being pumped through the cabinets. Fan noise is so loud that hearing protection is required for people spending more than 15 minutes in the machine room. The system has a cooling capacity of 23.2 MW (6600 tons) and works by chilling water to 5.5 °C (42 °F), which in turn cools recirculated air.

Researchers also have access to EVEREST (Exploratory Visualization Environment for Research and Technology) to better understand the data that Titan outputs. EVEREST is a visualization room with a 10 by 3 meter (33 by 10 ft) screen and a smaller, secondary screen. The screens are 37 and 33 megapixels respectively with stereoscopic 3D capability.

IBM Sequoia


IBM Sequoia is a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012.

On June 14, 2012, the TOP500 Project Committee announced that Sequoia replaced the K computer as the world’s fastest supercomputer, with a LINPACK performance of 16.32 petaflops, 55% faster than the K computer’s 10.51 petaflops, having 123% more cores than the K computer’s 705,024 cores. Sequoia is also more energy efficient, as it consumes 7.9 MW, 37% less than the K computer’s 12.6 MW.
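
Those percentages check out against the raw figures. In the quick Python check below, the only number not quoted above is Sequoia's widely cited total of 1,572,864 cores, which is an outside figure rather than something stated in this section.

# Verify the comparison figures quoted above.
sequoia_pf, k_pf = 16.32, 10.51        # LINPACK petaflops
sequoia_mw, k_mw = 7.9, 12.6           # power draw in megawatts
k_cores = 705_024
sequoia_cores = 1_572_864              # widely cited Blue Gene/Q core count (assumption)

print(round((sequoia_pf / k_pf - 1) * 100))        # ~55% faster
print(round((sequoia_cores / k_cores - 1) * 100))  # ~123% more cores
print(round((1 - sequoia_mw / k_mw) * 100))        # ~37% less power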

As of June 17, 2013, Sequoia had dropped to #3 on the TOP500 ranking, behind Tianhe-2 and Titan, and it still held #3 on the TOP500 ranking of November 2014.

Record-breaking science applications have been run on Sequoia, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation.

The entire supercomputer runs on Linux, with CNK running on over 98,000 nodes, and Red Hat Enterprise Linux running on 768 I/O nodes that are connected to the Lustre filesystem.

K computer


The K computer, named for the Japanese word "kei" (京), meaning 10 quadrillion, is a supercomputer manufactured by Fujitsu, currently installed at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan. It is based on a distributed memory architecture with over 80,000 compute nodes. It is used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system is based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.

In June 2011, TOP500 ranked K the world’s fastest supercomputer, with a computation speed of over 8 petaflops, and in November 2011, K became the first computer to top 10 petaflops. It had originally been slated for completion in June 2012. In June 2012, K was superseded as the world’s fastest supercomputer by the American IBM Sequoia and as of November 2015, K is the world’s fourth-fastest computer.

IBM Mira


Mira is a petascale Blue Gene/Q supercomputer. As of June 2013, it is listed on TOP500 as the fifth-fastest supercomputer in the world. It has a performance of 8.59 petaflops (LINPACK) and consumes 3.9 MW. The supercomputer was constructed by IBM for Argonne National Laboratory's Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira will be used for scientific research, including studies in the fields of material science, climatology, seismology, and computational chemistry. The supercomputer is being utilized initially for sixteen projects, selected by the Department of Energy.

The Argonne Leadership Computing Facility, which commissioned the supercomputer, was established by the America COMPETES Act, signed by President Bush in 2007, and President Obama in 2011. The United States’ emphasis on supercomputing has been seen as a response to China’s progress in the field. China’s Tianhe-1A, located at the Tianjin National Supercomputer Center, was ranked the most powerful supercomputer in the world from October 2010 to June 2011. Mira is, along with IBM Sequoia and Blue Waters, one of three American petascale supercomputers deployed in 2012.

The cost for building Mira has not been released by IBM. Early reports estimated that construction would cost US$50 million, and Argonne National Laboratory announced that Mira was bought using money from a grant of US$180 million. In a press release, IBM marketed the supercomputer’s speed, claiming that “if every man, woman and child in the United States performed one calculation each second, it would take them almost a year to do as many calculations as Mira will do in one second”.

Trinity


The NNSA Office of Advanced Simulation and Computing (ASC) faces significant challenges from ongoing technology advancements and must continue to meet the mission needs of current applications while also adapting to revolutionary and evolutionary changes in computing technology. ASC recognizes that the simulation environment of the future will be transformed by new computing architectures and new programming models, and has established the development and deployment of a series of Advanced Technology (AT) systems. The ASC roadmap states that "work in this timeframe will establish the technological foundation to build toward exascale computing environments, which predictive capability may demand." It is critical for ASC both to explore the rapidly changing technology of future systems and to provide platforms with more capability and higher performance for predictive capability. Trinity is the first instantiation of an AT system and will achieve a balance between usability for the current simulation codes and adaptability to new computing technologies and programming methodologies.

The Trinity supercomputer is provided by Cray, Inc. and is based on its XC30 platform architecture. Trinity is a mixture of Intel Haswell and Knights Landing (KNL) processors. The Haswell partition provides a natural transition path for many of the legacy codes running on the Cielo supercomputer, Trinity's predecessor. In order to use the KNL processor to its full potential, the ASC code teams must expose higher levels of thread- and vector-level parallelism than has been necessary for traditional multicore architectures. To help facilitate this transition, the Trinity Center of Excellence was established, with staff from the ASC tri-Labs, Cray, and Intel.

Trinity introduces tightly integrated nonvolatile “burst buffer” storage capabilities. Embedded within the high-speed fabric are nodes with attached solid-state disk drives. The burst buffer capability will allow for accelerated checkpoint/restart performance and relieve much of the pressure normally loaded on the back-end storage arrays. In addition, the burst buffer will support novel new workload management strategies such as in-situ analysis, which opens a whole space in which projects can manage their overall workflows.

Trinity also introduces advanced power management functionality that allows monitoring and control of power consumption at the system, application, and component levels. Although advanced power management is not needed for the current power and operational budget, its functionality is being used to gain a better understanding for future system requirements and features.

Trinity High-level Technical Specifications

  • Operational lifetime: 2015 to 2020
  • Capability: 8x to 12x improvement over Cielo in fidelity, physics, and performance capabilities
  • Architecture: Cray XC30
  • Memory capacity: >2 PB of DDR4 DRAM
  • Peak performance: >40 PF
  • Number of compute nodes: >19,000
  • Processor architecture: Intel Haswell & Knights Landing
  • Parallel file system capacity (usable): >80 PB
  • Parallel file system bandwidth (sustained): 1.45 TB/s
  • Burst buffer storage capacity (usable): 3.7 PB
  • Burst buffer bandwidth (sustained): 3.3 TB/s
  • Footprint: <5,200 sq ft
  • Power requirement: <10 MW

Piz Daint


CSCS organized a four-day training course, held March 24-27, on "Piz Daint", the CSCS hybrid Cray XC30 system. "Piz Daint" has 5,272 compute nodes (with Intel® Xeon® E5-2670 and NVIDIA® Tesla® K20X) and a peak performance of 7.8 petaflops. The presentations were given by experts from Cray, NVIDIA and Allinea.

Hazel Hen


The High Performance Computing Center Stuttgart (HLRS), a member of the Gauss Centre for Supercomputing, today reported completion of the second upgrade of its supercomputing installation. The Cray (NASDAQ: CRAY) XC40 system at HLRS, code-named "Hazel Hen", delivers a peak performance of 7.42 petaflops, almost twice as much as the previous system, known as Hornet. Hazel Hen marks the final expansion stage as defined in HLRS's system roadmap and is now officially open for operation and available to support national and European scientific and industrial users.

Hazel Hen is powered by the latest Intel Xeon processor technologies and the Cray Aries interconnect, leveraging the Dragonfly network topology. The installation encompasses 41 system cabinets hosting 7,712 compute nodes with a total of 185,088 Intel Haswell E5-2680 v3 compute cores. Hazel Hen features 965 terabytes of main memory and a total of 11 petabytes of storage capacity spread over 32 additional cabinets hosting more than 8,300 disk drives, which can be accessed at an input/output rate of more than 350 gigabytes per second.

As with its previous installation, HLRS vigorously tested the new system prior to declaring it "up and running". Scientists of the Institute of Aerodynamics (AIA) at RWTH Aachen University leveraged HLRS's new HPC platform for studies within the scope of the special research project (Sonderforschungsprojekt) SFB/TransRegio 129/Oxyflame, which aims at reducing the CO2 emissions of conventional coal-fired power plants through oxy-fuel combustion.

Simulating the heating processes of coal dust, the scientists aimed at gaining a better understanding about the conditions causing the carbon dust to ignite in an oxygen-carbon dioxide atmosphere. Calculations of such scenarios are extremely complex since carbon particles are of irregular, non-spherical shape which is why their motion is difficult to predict. Hazel Hen allows for the simulation of thousands of fully dissolved carbon particles moving freely in a turbulent flow. “Thanks to the computing capacities offered by Hazel Hen, we are able to execute calculations with particle numbers of a magnitude that up to now would have required several individual simulation steps,” explains principal investigator Dr. Matthias Meinke of the AIA.

Researchers of the Institute for Applied Materials (IAM) of the Karlsruhe Institute of Technology (KIT) and of the Institute of Materials and Processes (IMP) of the Karlsruhe University of Applied Sciences are leveraging the computing capacity of Hazel Hen for numerical simulations of solidification processes using the phase-field method. They simulated the ternary eutectic directional solidification of Al-Ag-Cu (aluminium, silver, copper) in an area of 4116 x 4008 x 1000 cells with the aim of studying the resulting patterns and the 3D development of the microstructure. Using 171,696 compute cores of Hazel Hen, the researchers used the computing capacity of the new HLRS supercomputer almost to its full extent. Ternary super-alloys with defined properties for high-performance materials are of growing importance, e.g. in the aerospace industry. A solid understanding of the material and process parameters of the solidification process is thus indispensable, and simulations like the ones executed on Hazel Hen provide valuable insight towards it.

“With Hazel Hen, we again are in the favourable position of being able to offer our users a state-of-the-art HPC system that meets their requirements,” explains Professor Dr.-Ing. Michael M. Resch, Director of the HLRS. “The first user projects already did deliver outstanding results and we are confident for Hazel Hen to achieve further simulation highlights in the future.”

With the installation of Hazel Hen, HLRS completed the last step of its system roadmap as defined in the current purchasing plan of the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria and North Rhine-Westphalia. This purchasing plan specified the step-by-step installation and expansion of Tier-0 HPC systems at the three national German high-performance computing centres in Stuttgart (HLRS), Garching near Munich (Leibniz Supercomputing Centre/LRZ) and Jülich (Jülich Supercomputing Centre/JSC) to ensure Germany's competitiveness in the global HPC arena.

The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s Tier-0 supercomputing institution. Concertedly, the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe to serve a wide range of industrial and research activities in various disciplines.

Shaheen 2


Shaheen consists primarily of a 16-rack IBM Blue Gene/P supercomputer owned and operated by King Abdullah University of Science and Technology (KAUST). Built in partnership with IBM, Shaheen is intended to enable KAUST Faculty and Partners to research both large- and small-scale projects, from inception to realization.

Shaheen, named after the peregrine falcon, was the largest and most powerful supercomputer in the Middle East and was intended to grow into a petascale facility by the year 2011. Originally built at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Shaheen was moved to KAUST in mid-2009.

The father of Shaheen is Majid Alghaslan, KAUST's founding interim chief information officer and the University's leader in the acquisition, design, and development of the Shaheen supercomputer. Majid was part of the executive founding team for the University and was also the person who named the machine.

Shaheen includes the following functional elements:

  • 16 racks of Blue Gene/P, with a peak performance of 222 teraflops
  • 164 IBM System x 3550 Xeon nodes, with a peak performance of 12 teraflops

Performance

Shaheen's performance and computing capabilities include:

  • 65,536 independent processing cores
  • A next-generation data center that is able to scale to exascale computing requirements
  • 10 Gbit/s access to the world's academic and research networks

The file system and tape drive will be mounted across both the Blue Gene system and the Linux cluster. All elements of the system will be connected together on a common network backbone that is accessible from all campus buildings. The systems will also be accessible from the Internet.

Services
The Shaheen system at KAUST Supercomputing Laboratory (KSL) is available to help KAUST users and projects, to provide training and advice, to develop and deploy applications, to provide consultation on best practices and to provide collaboration support as needed.

KAUST Faculty will have access to:

  • General support for Shaheen facility use, including usage scheduling of Shaheen and peripheral systems
  • High-performance computing support for "Grand Challenges" through collaboration with the Center to deliver fundamental breakthroughs in specific areas of research
  • Collaboration to provide high-performance computing applications, middleware, library, algorithm support and enablement services
  • Applications enablement, where users can task the CDCR to develop, enable, port and scale key applications
  • High-performance computing program best-practice management techniques
  • Participation with KAUST researchers in external projects
  • Training on high-performance computing systems management, programming, applications tuning and algorithms
Future Plans
On Monday 17th November 2014 KAUST announced the successor to the Blue Gene/P system that was installed in June 2009. Cray will provide KAUST with a Cray® XC40™ supercomputer with DataWarp™ technology, a Cray® Sonexion® 2000 storage system, a Cray Tiered Adaptive Storage (TAS) system and a Cray® Urika-GD™ graph analytics appliance. The Cray XC40 system at KAUST, with the project name “Shaheen II,” will be 25 times more powerful than its current system. KAUST will significantly augment its world-class academic and research facilities and capabilities to advance scientific discoveries.

Stampede


Stampede is one of the most powerful supercomputers in the world. But, what does this mean and why is it important?

Supercomputers complement scientific theory and observation by modeling and analyzing anything that is too large (planets), too small (drug molecules), or too expensive or dangerous (crash tests for cars) to test in the laboratory. Determining where and when earthquakes will strike; exploring which nanomaterials will convert sunlight into energy; and understanding how fast brain tumors grow — these important and complex societal problems require powerful computers like Stampede, which provides a peak performance of nearly 10 petaflops (PF), or nearly 10 quadrillion math operations per second.

Stampede is an important part of NSF’s portfolio for advanced computing infrastructure enabling cutting-edge foundational research for computational and data-intensive science and engineering. Society’s ability to address today’s global challenges depends on advancing cyberinfrastructure.

FARNAM JAHANIAN, HEAD OF NSF’S DIRECTORATE FOR COMPUTER AND INFORMATION SCIENCE AND ENGINEERING

TOTAL PEAK PERFORMANCE

1 petaflop (PF) = 1 quadrillion math operations per second

The most powerful systems in the world are petascale systems, where a massive number of computers work in parallel to solve the same problem. Given the current speed of progress, industry experts estimate that supercomputers will reach one exaflop (one quintillion operations per second) by 2018.

9 Reasons You Should Learn Java Programming

Java is one of the most famous programming languages in the world. It was designed to run seamlessly on any platform. From the Java home page, we can observe that more than 1 billion computers and 3 billion mobile phones worldwide run Java.


Here are 9 reasons why you should become a Java programmer.

1) Ease of Learning

Java is very powerful and easy to learn, even for beginners who have never written a line of code in their lives. Java looks a lot like English, apart from some new symbols such as angle brackets and generics. Once a programmer gets past the initial hurdles of installing the JDK, setting up the PATH and understanding how the classpath works, it's pretty easy to write programs in Java.

2) Object-Oriented Language

One of the main reasons Java is so popular is that it is an object-oriented programming language. Developing OOP applications is much easier, and it also helps keep the system modular, flexible and extensible. Once you know OOP concepts like abstraction, encapsulation, polymorphism and inheritance, you can do miracles with Java.

3) Java Has a Rich API

One more reason for Java's huge success is its rich API, which is highly visible because it comes along with the Java installation. Java provides APIs for I/O, networking, utilities, XML parsing, database connections, and almost everything else. Whatever is left is covered by open source libraries like Apache Commons, Google Guava and others.

4) Java Has Killer Editors

The IDEs available for Java will blow your mind. Thanks to Java's strong typing, you'll not only be notified immediately of errors, but you'll also be given suggestions that refactor and reformat your code with clear explanations and extreme ease. After using them, most people wonder how they ever coded before. The most commonly used editors for Java programming are Eclipse, NetBeans, JCreator, etc.

5) Collection of Open Source Libraries

Open source libraries ensure that Java can be used everywhere. Apache, Google, and other organizations have contributed a lot of great libraries, which make Java development easy, fast and cost effective. There are frameworks like Spring, Struts and Maven, which ensure that Java development follows best practices of software craftsmanship, promote the use of design patterns and help Java developers get their job done.

6) Huge Community Support

Since Java has been around for more than 20 years, the Java community has grown very large. The community is the biggest strength of the Java programming language and platform. No matter how good a language is, it won't survive without a community to support it, help users and share knowledge. Java has been very lucky: it has lots of active forums, Stack Overflow, open source organizations and several Java user groups to help with everything. There are communities to help beginners, advanced and even expert Java programmers. Java actively promotes the habit of taking from and giving back to the community. Lots of programmers who use open source contribute as committers, testers and so on, and expert programmers provide free advice on the various Java forums and Stack Overflow. This is simply amazing and gives a lot of confidence to a newbie in Java.

7) Java Is Platform Independent

In the 1990s, this was the main reason for Java's popularity. The idea of platform independence is great, and Java's tag line "write once, run anywhere" was enticing enough to attract lots of new development to Java. This is still one of the reasons Java remains a top programming language: many Java applications are developed in a Windows environment and run on a UNIX platform. The reason for this platform independence is that Java runs on a virtual machine, the Java Virtual Machine (JVM), rather than running directly on the operating system like C or C++.

8) Java Is Omnipresent

Java is running just about everywhere you can imagine. It’s usually where most large applications end up due to its scalability, stability, and maintainability. There’s also currently a gigantic push in the Java community to be the leader of the IoT (Internet of Things). And it’s coming. Very fast. There’ll be a time in the near future when your alarm clock will automatically start brewing your coffee pot, and it’ll most likely be Java doing that.

9) Java Is FREE

People like free things, don't they? So if a programmer wants to learn a programming language, or an organization wants to adopt a technology, cost is an important factor. Java has been free from the start, i.e. you don't need to pay anything to create a Java application. Being free also helped Java become popular among individual programmers and large organizations alike.

Would you take a course in Java? What is your favorite programming language? Comment below.

Google Is Now Testing New Shapes For Hard Drives That Will Be Taller And Cheaper

Short Bytes: To improve its cloud storage facilities and data centers, Google is now experimenting with new shapes of hard drives. Google aims to improve the overall performance and reduce the operational costs with taller hard drives.

Ever since their introduction, mechanical drives haven't changed much. Their physical size and their standard 3.5-inch and 2.5-inch form factors have remained the same for decades. With the introduction of faster SSDs, their dominance is under threat more than ever.

When it comes to data storage, cloud storage is slowly becoming an extension of our computers' physical drives. As more people store data in their cloud accounts, technology companies are working on improving their capabilities. They have managed to improve the data density of hard drives within a given space. Another factor that plays a key role in cloud storage is reliability.

To improve its cloud facilities and store more data on hard drives, Google has decided to experiment with new shapes. Google owns mammoth data centers where it develops and uses its own server designs. These servers depend on 3.5-inch drives, which carry a risk of data loss.

In a recent paper published on the Google Cloud Platform Blog, the company says it needs to explore changes like taller drives and grouping of disks. The company also mentions the need to "optimize the collection of disks, rather than a single disk in a server."

With these experiments and changes, Google aims to improve the overall performance apart from lowering the operational costs. One interesting fact — Google doesn’t seem much concerned with the reliability of the new drives. If you dare to ask why — it’s because cloud storage systems have their backups built in.

“We hope this is the beginning of both a new chapter for disks and a broad and healthy discussion, including vendors, academia and other customers, about what “data center” disks should be in the era of cloud,” Google writes.

What are your views regarding this development in HDDs? Share your views in the comments below.

How White Hat Hackers Hacked An Offline Laptop In Another Room Within Seconds

Short Bytes: Researchers from Tel Aviv University and Technion have found a way to steal encryption keys from "safe" air-gapped machines. The attack was launched from another room and the target was completely offline. Known as a side-channel attack, it doesn't try to break the encryption by exploiting a weakness in the encryption algorithm or by brute force. Instead, the researchers captured the electromagnetic waves emitted during the decryption process.

 


In recent times, hackers have been aggressively targeting air-gapped machines, which are considered super-secure. These systems are disconnected from the internet so that a hacker is unable to deploy any attack remotely via the internet or any other network.

Do you remember the researchers who used a homemade device “PITA” to steal keys and data from your PC? That device captured the stray radio waves emitted by your computer’s processor.

Going one step ahead, the same researchers from Tel Aviv University and Technion have showcased a way (PDF) to squeeze data from such “safe” air-gapped machines. Interestingly, the hacked computer was located in an adjacent room, across a wall.

This attack extracts the secret decryption keys within seconds from a target machine located in another room, using lab equipment (antenna, amplifiers, software-defined radio, and a laptop) worth about $3,000. The experts claim that with more lab research, the equipment could be simplified further.

The attack was completely non-intrusive and the targets were never touched. Known as a side-channel attack, it doesn't try to break the encryption by exploiting a weakness in the encryption algorithm or by brute force. Instead, the encryption was broken by capturing the electromagnetic waves emitted during the decryption process.


After obtaining the private key from a laptop running GnuPG, the researchers sent a specific encrypted message to the target. Now, the EM leakage of the target was measured repeatedly to reveal the key. The secret key was obtained after studying 66 decryption processes (each lasting 0.05 seconds) in 3.3 seconds.
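
Why measure 66 decryptions rather than just one? Averaging repeated traces of the same operation suppresses random noise, so a weak key-dependent pattern gradually stands out. The Python toy below only illustrates that averaging effect; it is not the researchers' key-extraction code, and the "signal" here is a made-up stand-in.

# Toy illustration: averaging N noisy measurements of the same hidden pattern
# shrinks the noise by roughly sqrt(N).
import random
import statistics

def noisy_trace(signal, noise_std=3.0):
    return [s + random.gauss(0, noise_std) for s in signal]

signal = [0, 1, 0, 1, 1, 0, 1, 0]        # stand-in for a key-dependent leakage pattern
N = 66                                    # same number of measurements as in the attack

traces = [noisy_trace(signal) for _ in range(N)]
averaged = [statistics.mean(col) for col in zip(*traces)]

def worst_error(estimate):
    return max(abs(e - s) for e, s in zip(estimate, signal))

print(round(worst_error(traces[0]), 2))   # large: a single trace is swamped by noise
print(round(worst_error(averaged), 2))    # much smaller: the pattern emerges after averaging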

At the moment, such attacks are largely limited to research. But the researchers feel that in the not-so-distant future, hackers could make these techniques more accessible and cheaper to use.

“Our work is most pertinent to systems that are carefully protected against software attacks, but—as we show—may be wide open to inexpensive physical attacks,” researchers said.

Tips and Tricks to Get The Best Out Of Windows 10

Here are some cool tips and tricks to get more out of Windows 10.

From the time Microsoft launched its new operating system, Windows 10, on July 29, it has been in the news for all the wrong reasons. While some complained about the security issues, others said it is little better than Windows 8. However, not everything in Windows 10 is bad.


Here are some tips and tricks for those who have recently purchased a computer with Windows 10 or have upgraded; they will help you save time and make the operating system more effective and user-friendly.

Use Cortana

You can ask Cortana anything, and you will get your answer. For example, you can ask Cortana questions about the weather, the age of your favorite celebrity or the events coming up on your calendar. Cortana, the digital personal assistant, not only finds files and launches apps, but also accepts typed requests.

To send an email, simply type "Send an email to <NAME> about <SUBJECT>", and you can set alarms or reminders the same way. You do not have to open a separate app to do so. You can save a lot of time, as you can do almost anything with Cortana just by asking or typing a request.

Get a Longer Battery Life

Are you aware that there is a battery saver mode that kicks in when AC power is unplugged? There are a few things you can do to make your battery last longer. Go to "Start", then "Settings", click on "System" and then "Battery Saver". This will show you which apps on your computer are consuming the most battery power. You can then add apps to the list of exclusions; those apps are allowed to keep running normally, while Windows 10 restricts everything else when the battery is low. This can really help when you have very little battery power left.

Increase Monitor Size

Windows 10 has a built-in virtual desktop feature, which makes it very simple to manage the open apps on your computer. All you need to do is move open apps to their own desktop until you need them again. Press [Windows] + [Tab] to see your desktops and get a summary of the open apps on each one. To see the contents, just hover the mouse pointer over one of the desktops at the bottom of the screen.

You can also drag apps from one desktop to another, and add more desktops by clicking on "New Desktop". This is very helpful for those who work from home and need a lot of windows and apps open at the same time, for instance multiple documents, or a browser alongside Excel or Word. You can switch easily between desktops by pressing [Windows] + [Ctrl] + [Left/Right arrow].

Using the Hidden Start Menu

With Windows 10, Microsoft brought back the Start Menu. The best thing about the Start Menu is that you can customize it in a number of ways. Everyone is aware of the left click, but you can now right-click the Start button too. Right-clicking the Start button gives you access to a range of shortcuts to the more advanced parts of Windows. Use the right-click option on the Start button whenever you want to reach these advanced areas quickly.

Record Videos of Apps

Use the built-in Game DVR app for taking screenshots and recording videos. Although it is meant for capturing video games, Game DVR works with any open app and can take screenshots or record video of anything you have open in Windows 10. It is very simple to use and handy when you want to make a video for a family member or friend on your computer. You can access it with the [Windows] + [G] shortcut, although the shortcut may not work every time.

Change File Explorer’s opening folder

In Windows 10, File Explorer opens with Quick Access selected. Old-school Windows users might prefer to start in This PC (previously known as My Computer), which includes the six standard data folders in your user profile as well as any local drives and removable media such as USB drives.

No problem. On the ribbon, open the View tab, click Options, Change folder and search options, and then choose one of these two options.

Use the expanded Send To menu

Yes, you can right-click a file or folder (or multiple items, for that matter) and use the Send To menu to do a few interesting things, like move or copy the selection to your Documents folder, create a compressed file (in .zip format), or send the selection as an email attachment.

Customize the Send To menu

Speaking of the Send To menu, you can make it much more useful by adding and removing the options on the default (short) menu. They’re just shortcuts, but good luck finding them, because they’re buried in a folder hidden deep within your user profile.

To get to that folder, open the Run box (Windows key+R), type shell:sendto, and then press Enter.

First order of business: delete the Fax Recipient shortcut. After that, you can add shortcuts to favorite folders (local and network). You can also add shortcuts to programs. Adding a shortcut to Notepad or another text editor makes it much easier to quickly edit any file, for example. Ditto for pictures and your favorite image editor.
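
If you prefer scripting, the folder that shell:sendto opens normally lives under %APPDATA%\Microsoft\Windows\SendTo, so a short Python snippet can list what is on the menu (the path is the usual default and may differ on customised systems).

# List the shortcuts that make up the Send To menu (Windows only).
import os

sendto = os.path.expandvars(r"%APPDATA%\Microsoft\Windows\SendTo")
print(sendto)
for entry in os.listdir(sendto):          # each entry appears as a Send To option
    print(" -", entry)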

Hope you like the tricks given above. Please share your own Windows 10 tips and tricks in the comments for the benefit of other readers.