Saturday, December 31, 2011

Nmap Main Scan types -sR

RPC scanning (-sR)

The RPC scan always works in combination with one of Nmap's other port scan methods. The idea is to determine whether the target's open ports are RPC ports or not. Decoys are not used in RPC scanning.

#nmap -sR -v

Starting nmap V. 2.54BETA30
Host ( appears to be up ... good.
Initiating Connect() Scan against (
Adding open port 23/tcp
Adding open port 80/tcp
Adding open port 139/tcp
Adding open port 1723/tcp
Adding open port 24/tcp
Adding open port 515/tcp
The Connect() Scan took 1 second to scan 1549 ports.
Interesting ports on (
(The 1543 ports scanned but not shown below are in state: closed)
Port       State       Service (RPC)
23/tcp     open        telnet                 
24/tcp     open        priv-mail              
80/tcp     open        http                   
139/tcp    open        netbios-ssn            
515/tcp    open        printer                
1723/tcp   open        pptp                   

Nmap run completed -- 1 IP address (1 host up) scanned in 1 second

Nmap Scanning and security basics.

The first step of attacking is to gather as much information as possible about the target network. An attacker looks for vulnerabilities in the different OSs and available services. This can be done with a tool called a scanner. Scanning can be done manually, but it is much easier to automate it with scanner tools such as SATAN (Security Administrator's Tool for Analyzing Networks), Nessus, Nmap and so on.
Scanners query TCP/IP ports and record the target's response. They glean valuable information about the target host by determining:
• What services are currently running?
• Who owns those services?
• Whether anonymous logins are supported
• Whether certain network services require authentication

Ports and port scanning

A port is an access point for an application running on a computer system. Every connection on an Internet or TCP/IP based network requires a source IP address and source port as well as a destination IP address and destination port. There are three kinds of ports: well-known ports, registered ports and dynamic (private) ports. The well-known ports range from 0 to 1023, the registered ports from 1024 through 49151, and the dynamic (private) ports from 49152 through 65535.
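The three ranges can be captured in a small helper function (the name classify_port is my own, for illustration only):

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be between 0 and 65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(classify_port(80))     # well-known
print(classify_port(8080))   # registered
print(classify_port(50000))  # dynamic/private
```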

Port scanning uses tools such as Nmap to automate the identification of active ports on a target system.
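A minimal sketch of the TCP connect() scan that such tools automate, using only Python's standard socket module (this is an illustration of the idea, not Nmap's implementation):

```python
import socket

def connect_scan(host: str, ports, timeout: float = 0.5):
    """Try a full TCP connect() to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                open_ports.append(port)
    return open_ports

# Scan a few common ports on the local machine.
print(connect_scan("127.0.0.1", [22, 23, 80, 139, 443]))
```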

Fingerprint scanning

Fingerprinting is a technique that tries to identify the operating system of a target system. The technique helps an attacker ascertain each target host's OS with a high probability. Once the target's OS is identified, the attacker can concentrate his efforts on compromising it.

Tuesday, December 27, 2011

What is a Proxy Server

A server that sits between a client application, such as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfill the requests itself. If not, it forwards the request to the real server.
Proxy servers have two main purposes:

  • Improve Performance: Proxy servers can dramatically improve performance for groups of users. This is because it saves the results of all requests for a certain amount of time. Consider the case where both user X and user Y access the World Wide Web through a proxy server. First user X requests a certain Web page, which we'll call Page 1. Sometime later, user Y requests the same page. Instead of forwarding the request to the Web server where Page 1 resides, which can be a time-consuming operation, the proxy server simply returns the Page 1 that it already fetched for user X. Since the proxy server is often on the same network as the user, this is a much faster operation. Real proxy servers support hundreds or thousands of users. The major online services such as America Online, MSN and Yahoo, for example, employ an array of proxy servers.

  • Filter Requests: Proxy servers can also be used to filter requests. For example, a company might use a proxy server to prevent its employees from accessing a specific set of Web sites.
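The caching behaviour described above can be sketched in a few lines; CachingProxy and the fetch callback are hypothetical names used for illustration only:

```python
import time

class CachingProxy:
    """Minimal sketch of a caching proxy: serve repeats from cache within a TTL."""
    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch          # function that performs the "real" request
        self.ttl = ttl_seconds
        self.cache = {}             # url -> (timestamp, body)

    def get(self, url):
        entry = self.cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]         # served from cache: no trip to the real server
        body = self.fetch(url)      # cache miss: forward to the real server
        self.cache[url] = (time.time(), body)
        return body

# User X fetches Page 1; user Y then receives the cached copy.
calls = []
proxy = CachingProxy(lambda url: calls.append(url) or f"<contents of {url}>")
proxy.get("http://example.com/page1")   # forwarded to the real server
proxy.get("http://example.com/page1")   # served from cache
print(len(calls))  # 1 -- the real server was contacted only once
```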


    What is a Firewall

    A firewall is a set of related programs, located at a network gateway server, that protects the resources of a private network from users from other networks. (The term also implies the security policy that is used with the programs.) An enterprise with an intranet that allows its workers access to the wider Internet installs a firewall to prevent outsiders from accessing its own private data resources and for controlling what outside resources its own users have access to.
    Basically, a firewall, working closely with a router program, examines each network packet to determine whether to forward it toward its destination. A firewall also includes or works with a proxy server that makes network requests on behalf of workstation users. A firewall is often installed in a specially designated computer separate from the rest of the network so that no incoming request can get directly at private network resources.
    There are a number of firewall screening methods. A simple one is to screen requests to make sure they come from acceptable (previously identified) domain name and Internet Protocol addresses. For mobile users, firewalls allow remote access in to the private network by the use of secure logon procedures and authentication certificates.
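Screening requests by acceptable, previously identified addresses can be sketched as an allowlist check; the networks below are hypothetical example rules, not a real policy:

```python
import ipaddress

# Hypothetical policy: only these source networks may reach the private LAN.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def permit(source_ip: str) -> bool:
    """Screen a request by its source address, as a simple firewall might."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(permit("192.168.1.42"))  # True  -- inside an allowed network
print(permit("203.0.113.9"))   # False -- outside address, dropped
```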
    A number of companies make firewall products. Features include logging and reporting, automatic alarms at given thresholds of attack, and a graphical user interface for controlling the firewall.
    Computer security borrows this term from firefighting, where it originated. In firefighting, a firewall is a barrier established to prevent the spread of fire.

    Tuesday, December 20, 2011

    Password Cracking

    Password cracking is the process of recovering secret passwords from data that has been stored in or transmitted by a computer system. A common approach is to repeatedly try guesses for the password.
    Most passwords can be cracked by using following techniques :

    1) Hashing :- Here we refer to the one-way function (which may be either an encryption function or a cryptographic hash) employed as a hash, and to its output as a hashed password.
    If a system uses a reversible function to obscure stored passwords, exploiting that weakness can recover even 'well-chosen' passwords.
    One example is the LM hash that Microsoft Windows uses by default to store user passwords that are less than 15 characters in length.
    LM hash breaks the password into two 7-character fields which are then hashed separately, allowing each half to be attacked separately.

    Hash functions like SHA-512 are considered computationally infeasible to invert when used correctly; MD5 and SHA-1, however, are no longer considered collision-resistant and should be avoided in new designs.
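For contrast with weak schemes like the LM hash described above, a one-way salted hash can be sketched with Python's standard hashlib. This is a simplified illustration, not a complete password-storage scheme; real systems should use a slow KDF such as bcrypt or PBKDF2:

```python
import hashlib, os

def hash_password(password, salt=None):
    """One-way hash of a password with a random salt (sketch, not a full KDF)."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt, stored = hash_password("hunter2")
# Verification repeats the hash; the stored digest is never "decrypted".
print(hash_password("hunter2", salt)[1] == stored)   # True
print(hash_password("HUNTER2", salt)[1] == stored)   # False
```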

    2) Guessing :- Many passwords can be guessed either by humans or by sophisticated cracking programs armed with dictionaries (dictionary based) and the user's personal information.

    Not surprisingly, many users choose weak passwords, usually one related to themselves in some way. Repeated research over some 40 years has demonstrated that around 40% of user-chosen passwords are readily guessable by programs. Examples of insecure choices include:

    * blank (none)
    * the word "password", "passcode", "admin" and their derivatives
    * the user's name or login name
    * the name of their significant other or another person (loved one)
    * their birthplace or date of birth
    * a pet's name
    * a dictionary word in any language
    * automobile licence plate number
    * a row of letters from a standard keyboard layout (e.g., on a QWERTY keyboard: qwerty itself, asdf, or qwertyuiop)
    * a simple modification of one of the preceding, such as suffixing a digit or reversing the order of the letters.
    and so on....

    In one survey of MySpace passwords which had been phished, 3.8 percent of passwords were a single word found in a dictionary, and another 12 percent were a word plus a final digit; two-thirds of the time that digit was 1.
    A password containing uppercase and lowercase characters, numbers and special characters is a strong password and is far harder to guess.
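A dictionary attack of the kind described above can be sketched as follows; the word list is a tiny illustrative sample, and the word-plus-final-digit variants mirror the pattern seen in the survey:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Try each candidate word (and word+digit variants) against a known hash."""
    for word in wordlist:
        candidates = [word] + [word + str(d) for d in range(10)]
        for cand in candidates:
            if hashlib.md5(cand.encode()).hexdigest() == target_hash:
                return cand
    return None

wordlist = ["password", "qwerty", "letmein"]
target = hashlib.md5(b"letmein1").hexdigest()   # a word plus a final digit
print(dictionary_attack(target, wordlist))      # letmein1
```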


    3) Default Passwords :- A moderately high number of local and online applications ship with default passwords configured by programmers during the development stages of the software. Many applications running on the Internet still have these default passwords enabled, so it is quite easy for an attacker to enter the default password and gain access to sensitive information. Lists containing the default passwords of many popular applications are available on the Internet.
    Always disable or change the default username-password pairs of applications, both online and offline.

    4) Brute Force :- If all other techniques fail, attackers fall back on the brute-force password cracking technique. An automated tool tries every possible combination of the available keys on the keyboard, and displays the password on screen as soon as the correct one is reached. This technique can take an extremely long time to complete, but the password will eventually be cracked.
    The longer the password, the longer a brute-force attack takes.
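A brute-force search can be sketched with itertools; the exponential growth of the search space is exactly why longer passwords take so much longer to crack:

```python
import itertools, string

def brute_force(target, alphabet=string.ascii_lowercase, max_len=4):
    """Try every combination of the alphabet up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if guess == target:
                return guess
    return None

print(brute_force("abc"))   # abc

# Why it is slow: the search space grows exponentially with length.
for n in (4, 6, 8):
    print(n, len(string.ascii_lowercase) ** n)
```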

    5) Phishing :- This is one of the most effective and most easily executed password cracking techniques. It is generally used to crack the passwords of e-mail accounts and of any account where secret or sensitive personal information is stored, such as social networking websites, matrimonial websites, etc.
    Phishing is a technique in which the attacker creates a fake login screen and sends it to the victim, hoping that the victim gets fooled into entering the account username and password. As soon as the victim clicks the "enter" or "login" button, this information reaches the attacker via scripts or online form processors, while the victim is redirected to the home page of the e-mail service provider.
    Never reply to messages that demand your username and password while claiming to come from your e-mail service provider.

    It is possible to try to obtain the passwords through other different methods, such as social engineering, wiretapping, keystroke logging, login spoofing, dumpster diving, phishing, shoulder surfing, timing attack, acoustic cryptanalysis, using a Trojan Horse or virus, identity management system attacks (such as abuse of Self-service password reset) and compromising host security.
    However, cracking usually designates a guessing attack.

    Sunday, December 18, 2011

    Operating Systems FILE SYSTEM FAT 32

    In order to overcome the volume size limit of FAT16, while still allowing DOS real-mode
    code to handle the format without unnecessarily reducing the available conventional memory,
    Microsoft decided to implement a newer generation of FAT, known as FAT32, with cluster counts
    held in a 32-bit field, of which 28 bits are currently used.
    In theory, this should support a total of approximately 268,435,438 (< 2^28) clusters, allowing
    for drive sizes in the range of 2 terabytes. However, due to limitations in Microsoft's scandisk
    utility, the FAT is not allowed to grow beyond 4,177,920 (< 2^22) clusters, placing the volume limit
    at 124.55 gigabytes, unless "scandisk" is not needed.
    FAT32 was introduced with Windows 95 OSR2, although reformatting was needed to use it,
    and DriveSpace 3 (the version that came with Windows 95 OSR2 and Windows 98) never
    supported it. Windows 98 introduced a utility to convert existing hard disks from FAT16 to FAT32
    without loss of data. In the NT line, support for FAT32 arrived in Windows 2000.
    Windows 2000 and Windows XP can read and write to FAT32 filesystems of any size, but
    the format program on these platforms can only create FAT32 filesystems up to 32 GB. Thompson
    and Thompson (2003) write[4] that "Bizarrely, Microsoft states that this behavior is by design."
    Microsoft's knowledge base article 184006[3] indeed confirms the limitation and the by design
    statement, but gives no rationale or explanation. Peter Norton's opinion[5] is that "Microsoft has
    intentionally crippled the FAT32 file system."
    The maximum possible size for a file on a FAT32 volume is 4 GiB minus 1 B (2^32 - 1 bytes).
    For most users, this has become the most nagging limit of FAT32 as of 2005, since video capture
    and editing applications can easily exceed this limit, as can the system swap file.
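The size limits quoted above can be checked with quick arithmetic:

```python
# Checking the FAT32 numbers quoted above.
cluster_values = 2 ** 28            # values addressable with the 28 used bits
print(cluster_values)               # 268435456, just above the ~268,435,438 usable clusters

max_file_size = 2 ** 32 - 1         # the 4 GiB minus 1 B file-size limit
print(max_file_size)                # 4294967295 bytes
print(max_file_size / 2 ** 30)      # just under 4.0 GiB
```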

    Operating Systems FILE SYSTEM FAT 16

    Initial FAT16
    In 1984 IBM released the PC AT, which featured a 20 MB hard disk. Microsoft introduced
    MS-DOS 3.0 in parallel. Cluster addresses were increased to 16-bit, allowing for a greater number
    of clusters (up to 65,517) and consequently much greater filesystem sizes. However, the maximum
    possible number of sectors and the maximum (partition, rather than disk) size of 32 MB did not
    change. Therefore, although technically already "FAT16", this format was not yet what today is
    commonly understood under this name. A 20 MB hard disk formatted under MS-DOS 3.0, was not
    accessible by the older MS-DOS 2.0. Of course, MS-DOS 3.0 could still access MS-DOS 2.0 style
    8 KB cluster partitions.
    MS-DOS 3.0 also introduced support for high-density 1.2 MB 5.25" diskettes, which
    notably had 15 sectors per track, hence more space for FAT. This probably prompted a dubious
    optimization of the cluster size, which went down from 2 sectors to just 1. The net effect was that
    high density diskettes were significantly slower than older double density ones.

    Final FAT16
    Finally in November 1987, in Compaq DOS 3.31, came what is today called the FAT16
    format, with the expansion of the 16-bit disk sector index to 32 bits. The result was initially called
    the DOS 3.31 Large File System. Although the on-disk changes were apparently minor, the entire
    DOS disk code had to be converted to use 32-bit sector numbers, a task complicated by the fact that
    it was written in 16-bit assembly language.
    In 1988 the improvement became more generally available through MS-DOS 4.0. The limit
    on partition size was now dictated by the 8-bit signed count of sectors-per-cluster, which had a
    maximum power-of-two value of 64. With the usual hard disk sector size of 512 bytes, this gives 32
    KB clusters, hence fixing the "definitive" limit for the FAT16 partition size at 2 gigabytes. On
    magneto-optical media, which can have 1 or 2 KB sectors, the limit is proportionally greater.
    Much later, Windows NT increased the maximum cluster size to 64 KB by considering the
    sectors-per-cluster count as unsigned. However, the resulting format was not compatible with any other
    FAT implementation of the time, and anyway, generated massive internal fragmentation. Windows
    98 also supported reading and writing this variant, but its disk utilities didn't work with it.
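The "definitive" 2 GB FAT16 limit follows directly from the numbers above:

```python
sector_size = 512          # bytes, the usual hard disk sector size
sectors_per_cluster = 64   # maximum power-of-two value of the signed 8-bit count
cluster_size = sector_size * sectors_per_cluster
print(cluster_size)                  # 32768 bytes = 32 KB clusters

max_clusters = 65_517                # from the 16-bit cluster addresses
print(max_clusters * cluster_size)   # 2146861056 bytes, i.e. just under 2 GiB
```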

    Operating Systems FILE SYSTEM FAT 12

    This initial version of FAT is now referred to as FAT12. As a filesystem for floppy disks, it
    had a number of limitations: no support for hierarchical directories, cluster addresses were "only"
    12-bits long (which made the code manipulating the FAT a bit tricky) and the disk size was stored
    as a 16-bit count of sectors, which limited the size to 32MB.
    An entry-level diskette at the time would be 5.25", single-sided, 40 tracks, with 8 sectors per
    track, resulting in a capacity of slightly less than 160KB. The above limits exceeded this capacity
    by one or more orders of magnitude and at the same time allowed all the control structures to fit
    inside the first track, thus avoiding head movement during read and write operations. The limits
    were successively lifted in the following years.
    Since the sole root directory had to fit inside the first track as well, the maximum possible
    number of files was limited to a few dozens.
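The diskette capacity and the 32 MB volume limit quoted above can be verified quickly:

```python
tracks, sectors_per_track, sector_size, sides = 40, 8, 512, 1
raw_bytes = tracks * sectors_per_track * sector_size * sides
print(raw_bytes, raw_bytes // 1024)  # 163840 bytes = 160 KB raw capacity;
                                     # after boot sector, FATs and the root
                                     # directory, slightly less is usable.

# A 16-bit count of 512-byte sectors caps the volume size at 32 MB.
print(2 ** 16 * sector_size)         # 33554432 bytes = 32 MiB
```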

    Operating Systems COMMAND LINE

    Command line interface (or CLI) operating systems can operate using only the keyboard for
    input. Modern OS's use a mouse for input with a graphical user interface (GUI) sometimes
    implemented as a shell. The appropriate OS may depend on the hardware architecture, specifically
    the CPU, with only Linux and BSD running on almost any CPU. Windows NT has been ported to
    other CPUs, most notably the Alpha, but not many. Since the early 1990s the choice for personal
    computers has been largely limited to the Microsoft Windows family and the Unix-like family, of
    which Linux and Mac OS X are becoming the major choices. Mainframe computers and embedded
    systems use a variety of different operating systems, many with no direct connection to Windows or
    Unix, but typically more similar to Unix than Windows.
    • Personal computers
    o IBM PC compatible - Microsoft Windows and smaller Unix-variants (like Linux and BSD)
    o Apple Macintosh - Mac OS X, Windows, Linux and BSD
    • Mainframes - A number of unique OS's, sometimes Linux and other Unix variants.
    • Embedded systems - a variety of dedicated OS's, and limited versions of Linux or other OS's
    The Unix-like family is a diverse group of operating systems, with several major subcategories
    including System V, BSD, and Linux. The name "Unix" is a trademark of The Open
    Group which licenses it for use to any operating system that has been shown to conform to the
    definitions that they have cooperatively developed. The name is commonly used to refer to the large
    set of operating systems which resemble the original Unix.
    Unix systems run on a wide variety of machine architectures. They are used heavily as
    server systems in business, as well as workstations in academic and engineering environments. Free
    software Unix variants, such as Linux and BSD, are increasingly popular. They are used in the
    desktop market as well, for example Ubuntu, but mostly by hobbyists.
    Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that
    vendor's proprietary hardware. Others, such as Solaris, can run on both proprietary hardware and on
    commodity x86 PCs. Apple's Mac OS X, a microkernel BSD variant derived from NeXTSTEP,
    Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS. Over the past several years,
    free Unix systems have supplanted proprietary ones in most instances. For instance, scientific
    modeling and computer animation were once the province of SGI's IRIX. Today, they are
    dominated by Linux-based or Plan 9 clusters.
    The team at Bell Labs who designed and developed Unix went on to develop Plan 9 and
    Inferno, which were designed for modern distributed environments. They had graphics built-in,
    unlike Unix counterparts that added it to the design later. Plan 9 did not become popular because,
    unlike many Unix distributions, it was not originally free. It has since been released under Free
    Software and Open Source Lucent Public License, and has an expanding community of developers.
    Inferno was sold to Vita Nuova and has been released under a GPL/MIT license.
    Microsoft Windows
    The Microsoft Windows family of operating systems originated as a graphical layer on top of
    the older MS-DOS environment for the IBM PC. Modern versions are based on the newer Windows
    NT core that first took shape in OS/2 and borrowed from OpenVMS. Windows runs on 32-bit and
    64-bit Intel and AMD computers, although earlier versions also ran on the DEC Alpha, MIPS, and
    PowerPC architectures (some work was done to port it to the SPARC architecture).
    As of 2004, Windows held a near-monopoly of around 90% of the worldwide desktop market share,
    although this is thought to be dwindling due to the increase of interest focused on open source
    operating systems. [1] It is also used on low-end and mid-range servers, supporting applications
    such as web servers and database servers. In recent years, Microsoft has spent significant marketing


    History of Operating Systems
    An operating system (OS) is a software program that manages the hardware and software
    resources of a computer. The OS performs basic tasks, such as controlling and allocating memory,
    prioritizing the processing of instructions, controlling input and output devices, facilitating
    networking, and managing files.
    The first computers did not have operating systems. However, software tools for managing
    the system and simplifying the use of hardware appeared very quickly afterwards, and gradually
    expanded in scope. By the early 1960s, commercial computer vendors were supplying quite
    extensive tools for streamlining the development, scheduling, and execution of jobs on batch
    processing systems. Examples were produced by UNIVAC and Control Data Corporation, amongst others.
    Through the 1960s, several major concepts were developed, driving the development of
    operating systems. The development of the IBM System/360 produced a family of mainframe
    computers available in widely differing capacities and price points, for which a single operating
    system OS/360 was planned (rather than developing ad-hoc programs for every individual model).
    This concept of a single OS spanning an entire product line was crucial for the success of
    System/360 and, in fact, IBM's current mainframe operating systems are distant descendants of this
    original system; applications written for the OS/360 can still be run on modern machines. OS/360
    also contained another important advance: the development of the hard disk permanent storage
    device (which IBM called DASD). Another key development was the concept of time-sharing: the
    idea of sharing the resources of expensive computers amongst multiple computer users interacting
    in real time with the system. Time sharing allowed all of the users to have the illusion of having
    exclusive access to the machine; the Multics timesharing system was the most famous of a number
    of new operating systems developed to take advantage of the concept.
    Multics, particularly, was an inspiration to a number of operating systems developed in the
    1970s, notably Unix. Another commercially-popular minicomputer operating system was VMS.
    The first microcomputers did not have the capacity or need for the elaborate operating systems that
    had been developed for mainframes and minis; minimalistic operating systems were developed.
    One notable early operating system was CP/M, which was supported on many early
    microcomputers and was largely cloned in creating MS-DOS, which became wildly popular as the
    operating system chosen for the IBM PC (IBM's version of it was called IBM-DOS or PC-DOS), its
    successors making Microsoft one of the world's most profitable companies. The major alternative
    throughout the 1980s in the microcomputer market was Mac OS, tied intimately to the Apple
    Macintosh computer.
    By the 1990s, the microcomputer had evolved to the point where, as well as extensive GUI
    facilities, the robustness and flexibility of operating systems of larger computers became
    increasingly desirable. Microsoft's response to this change was the development of Windows NT,
    which served as the basis for Microsoft's entire operating system line starting in 1999. Apple rebuilt
    their operating system on top of a Unix core as Mac OS X, released in 2001. Hobbyist-developed
    reimplementations of Unix, assembled with the tools from the GNU project, also became popular;
    versions based on the Linux kernel are by far the most popular, with the BSD derived UNIXes
    holding a small portion of the server market.
    The growing complexity of embedded devices has a growing trend to use embedded
    operating systems on them.

    Saturday, December 17, 2011

    Netstat, Telnet and tracert

    Netstat :- It displays protocol statistics and current TCP/IP network connections, i.e. local address, remote address, port number, etc.
    Its syntax is (at the command prompt)--
    c:/>netstat -n

    Telnet :- Telnet is a program which runs over TCP/IP. Using it we can connect to a remote computer on a particular port. When connected, it grabs the banner of the daemon running on that port.
    The basic syntax of Telnet is (at the command prompt)--
    c:/>telnet <hostname or IP>

    By default telnet connects to port 23 of the remote computer.
    So, the complete syntax is-
    c:/>telnet <hostname or IP> <port>

    example:- c:/>telnet <target IP> 21
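What telnet does when it grabs the daemon can be sketched as a banner grab with Python's socket module; grab_banner is my own illustrative name, not a standard function:

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Connect to a port and read whatever greeting the daemon sends first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""   # some services wait for the client to speak first

# e.g. grab_banner(target, 21) would typically return an FTP 220 greeting line.
```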

    Tracert :- It is used to trace the route taken by data packets from source to destination.
    Its syntax is (at the command prompt)--
    example:- c:/>tracert <hostname or IP>
    Here "*    *    *    Request timed out." indicates that a firewall installed on that system blocks the request, and hence we can't obtain its IP address.

    The various attributes used with the tracert command and their usage can be viewed by just typing c:/>tracert at the command prompt.

    The information obtained by using tracert command can be further used to find out exact operating system running on target system.

    Ping Command for Network hacking

    Network Hacking generally means gathering information about a domain by using tools like Telnet, NslookUp, Ping, Tracert, Netstat, etc.
    It also includes OS Fingerprinting, Port Scanning and Port Surfing using various tools.

    Ping :- Ping uses ICMP (Internet Control Message Protocol) and is used to troubleshoot TCP/IP networks. Ping is basically a command that allows you to check whether a host is alive or not.
    To ping a particular host the syntax is (at the command prompt)--

    example:- c:/>ping <hostname or IP>
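The liveness check can be automated with a small wrapper around the ping command; build_ping_command and is_alive are illustrative names of my own:

```python
import platform, subprocess

def build_ping_command(host, count=1):
    """Build the platform-appropriate ping invocation (-n on Windows, -c elsewhere)."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    return ["ping", flag, str(count), host]

def is_alive(host):
    """Return True if the host answers the echo request (exit code 0)."""
    result = subprocess.run(build_ping_command(host),
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

# is_alive("127.0.0.1") is normally True; an unreachable host returns False.
```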

    Thursday, December 15, 2011



    • The application layer is the OSI layer that is closest to the user.
    • It provides network services to the user’s applications.
    • It differs from the other layers in that it does not provide services to any
    other OSI layer, but rather, only to applications outside the OSI model.
    • Examples of such applications are spreadsheet programs, word processing
    programs, and bank terminal programs.
    • The application layer establishes the availability of intended
    communication partners, synchronizes and establishes agreement on
    procedures for error recovery and control of data integrity.

    • The presentation layer ensures that the information that the application
    layer of one system sends out is readable by the application layer of
    another system.
    • If necessary, the presentation layer translates between multiple data
    formats by using a common format.
    • Provides encryption and compression of data.
    • Examples :- JPEG, MPEG, ASCII, EBCDIC, HTML

    • The session layer defines how to start, control and end conversations (called
    sessions) between applications.
    • This includes the control and management of multiple bi-directional messages
    using dialogue control.
    • It also synchronizes dialogue between two hosts' presentation layers and
    manages their data exchange.
    • The session layer offers provisions for efficient data transfer.
    • Examples :- SQL, ASP(AppleTalk Session Protocol).

    • The transport layer regulates information flow to ensure end-to-end
    connectivity between host applications reliably and accurately.
    • The transport layer segments data from the sending host's system and
    reassembles the data into a data stream on the receiving host's system.
    • The boundary between the transport layer and the session layer can be
    thought of as the boundary between application protocols and data-flow
    protocols. Whereas the application, presentation, and session layers are
    concerned with application issues, the lower four layers are concerned
    with data transport issues.
    • Layer 4 protocols include TCP (Transmission Control Protocol) and UDP
    (User Datagram Protocol).

    • The network layer defines end-to-end delivery of packets.
    • Defines logical addressing so that any endpoint can be identified.
    • Defines how routing works and how routes are learned so that the
    packets can be delivered.
    • The network layer also defines how to fragment a packet into smaller
    packets to accommodate different media.
    • Routers operate at Layer 3.
    • Examples :- IP, IPX, AppleTalk

    • The data link layer provides access to the networking media and physical
    transmission across the media and this enables the data to locate its
    intended destination on a network.
    • The data link layer provides reliable transit of data across a physical link
    by using the Media Access Control (MAC) addresses.
    • The data link layer uses the MAC address to define a hardware or data
    link address in order for multiple stations to share the same medium and
    still uniquely identify each other.
    • Concerned with network topology, network access, error notification,
    ordered delivery of frames, and flow control.
    • Examples :- Ethernet, Frame Relay, FDDI.

    • The physical layer deals with the physical characteristics of the
    transmission medium.
    • It defines the electrical, mechanical, procedural, and functional
    specifications for activating, maintaining, and deactivating the physical
    link between end systems.
    • Such characteristics as voltage levels, timing of voltage changes, physical
    data rates, maximum transmission distances, physical connectors, and
    other similar attributes are defined by physical layer specifications.
    • Examples :- EIA/TIA-232, RJ45, NRZ.
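The seven layers and the examples given above can be summarised in a small lookup table:

```python
# The seven OSI layers, top to bottom, with the examples given for each.
OSI_LAYERS = {
    7: ("Application",  ["spreadsheets", "word processors", "bank terminals"]),
    6: ("Presentation", ["JPEG", "MPEG", "ASCII", "EBCDIC", "HTML"]),
    5: ("Session",      ["SQL", "ASP (AppleTalk Session Protocol)"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "IPX", "AppleTalk"]),
    2: ("Data link",    ["Ethernet", "Frame Relay", "FDDI"]),
    1: ("Physical",     ["EIA/TIA-232", "RJ45", "NRZ"]),
}

for num in sorted(OSI_LAYERS, reverse=True):
    name, examples = OSI_LAYERS[num]
    print(f"Layer {num}: {name:<12} e.g. {', '.join(examples)}")
```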



    Over the past couple of decades many of the networks that were built used
    different hardware and software implementations, as a result they were
    incompatible and it became difficult for networks using different
    specifications to communicate with each other.
    • To address the problem of networks being incompatible and unable to
    communicate with each other, the International Organisation for
    Standardisation (ISO) researched various network schemes.
    • The ISO recognised there was a need to create a NETWORK MODEL
    that would help vendors create interoperable network implementations.

    Wednesday, December 14, 2011