Are there any issues with having huge files on the HD in a 32-bit system?

I know that in 32-bit systems the largest amount of memory we can have is 4 GB (2^32 bytes). But I am not clear on what the implications of this are for files.
I think we can have files of arbitrary size on our hard disks, right? A lot more than 4 GB. So are there any caveats with 32-bit systems and large files?
I assume that certain 32-bit programs would not be able to load files of more than 4 GB, or am I wrong?

Answers

It only matters if you have an application that tries to load the entire file into memory.
A programmer who does that for such large files should be shot; there are better ways.

Some software might burp on very large files (large meaning > 2 gigabytes), but such software will usually do that on 64-bit systems too.
In most cases this is because the programmer designed and tested the software with smaller files. The software contains logic errors that prevent it from working properly with very large files; it is not a limitation of the OS itself.
(A very common example is using a signed 32-bit number to keep track of the position in the file, which causes problems at the 2 GB boundary.)
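
As a concrete illustration, here is a minimal C sketch of that bug and of the usual fix on 32-bit Linux builds: compile with a 64-bit off_t and use fseeko()/ftello() instead of fseek()/ftell(). The file name "huge.bin" is made up for the example.

    /* Minimal sketch: why a signed 32-bit file position breaks at 2 GB, and the
     * common Large File Support fix on 32-bit Linux/glibc.  The define below
     * makes off_t 64 bits wide even in a 32-bit build. */
    #define _FILE_OFFSET_BITS 64   /* request 64-bit off_t from glibc */
    #define _LARGEFILE_SOURCE      /* make sure fseeko()/ftello() are declared */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        const long long three_gib = 3LL * 1024 * 1024 * 1024;

        /* On an ILP32 platform 'long' is 32 bits, so fseek()/ftell(), which use
         * 'long' for the position, simply cannot express an offset of 3 GiB:
         * that is the classic "2 GB boundary" problem. */
        printf("sizeof(long) = %zu, sizeof(off_t) = %zu\n",
               sizeof(long), sizeof(off_t));

        FILE *fp = fopen("huge.bin", "rb");   /* hypothetical large file */
        if (fp == NULL)
            return 1;

        if (fseeko(fp, (off_t)three_gib, SEEK_SET) == 0)   /* fine with 64-bit off_t */
            printf("now positioned at %lld\n", (long long)ftello(fp));

        fclose(fp);
        return 0;
    }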

In the case of your video example: only a small part (the part that is actually playing, plus a few additional seconds of buffering) is typically loaded into memory at once, usually no more than 2-3 megabytes at a time.
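
Something roughly like this C sketch, which streams an arbitrarily large file through a small fixed-size buffer; the file name and the 2 MiB buffer size are arbitrary choices for illustration:

    /* Stream a huge file through a small, fixed-size buffer: only BUF_SIZE bytes
     * are ever held in memory, no matter how large the file is. */
    #define _FILE_OFFSET_BITS 64   /* lets a 32-bit build open files > 2 GiB */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        enum { BUF_SIZE = 2 * 1024 * 1024 };   /* ~2 MiB resident at any time */
        unsigned char *buf = malloc(BUF_SIZE);
        FILE *fp = fopen("huge.mkv", "rb");    /* hypothetical video file */
        if (buf == NULL || fp == NULL)
            return 1;

        unsigned long long total = 0;
        size_t n;
        while ((n = fread(buf, 1, BUF_SIZE, fp)) > 0) {
            /* decode/play/process buf[0..n) here, then let it be overwritten */
            total += n;
        }
        printf("streamed %llu bytes using a %u-byte buffer\n",
               total, (unsigned)BUF_SIZE);

        free(buf);
        fclose(fp);
        return 0;
    }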

As for files of arbitrary size on a hard disk: that is not true.
Every filesystem has a limit on the maximum size of any single file.
E.g. in the case of FAT32 that limit is 4 GB per file. NTFS has a limit of 16 TB. The Linux filesystem ext3 has a 16 GB, 256 GB or 2 TB limit depending on whether the filesystem uses 1K, 2K or 4K blocks.
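
If you are curious what a particular mounted filesystem will accept, POSIX systems expose this through pathconf(). A small sketch (the mount point "/mnt/usb" is just a placeholder):

    /* Ask a filesystem how many bits it uses to represent file sizes.
     * _PC_FILESIZEBITS is a standard POSIX pathconf() query; the exact value
     * reported depends on the OS and the filesystem driver. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long bits = pathconf("/mnt/usb", _PC_FILESIZEBITS);
        if (bits == -1)
            perror("pathconf");
        else
            printf("file sizes here are represented with %ld bits\n", bits);
        return 0;
    }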

I know that in 32-bit systems the largest amount of memory we can have is 4 GB (2^32 bytes).

This is wrong; it is perfectly possible for a 32-bit CPU to use more than 4 GiB of RAM, just as it is possible for a 16-bit CPU to use more than 64 KiB of RAM. Recall that the 16-bit 80286 could address 16 MiB through its 24-bit address bus. At the time this was considered a huge amount of memory: the 80286 became available in 1982, 1983 saw the introduction of the first 3.5" hard disk (sporting a 10 MB storage capacity), and the IBM PC AT, which was designed around the Intel 80286, came with a minimum of 256 KiB of RAM. Likewise, the 1978-vintage Intel 8086 had an address space of 1 MiB, and its close sibling the 8088 provided the computational capacity of the original IBM 5150 PC, which could be upgraded to the same 256 KiB of RAM, well beyond the 64 KiB limit of a native 16-bit address. Look up techniques such as Physical Address Extension (PAE), bank switching (which, though requiring care on the part of the programmer, was common in early PCs and earlier electronic computers thanks to its relative implementation simplicity; the Apollo Guidance Computer was a bank-switched design), and segmented memory models such as x86 memory segmentation.

The ultimate limiting factor for how much memory can be addressed without resorting to such techniques is the width of the CPU's native address bus, which is independent of the CPU's native word width (its "bitness", as it is usually called). It would be perfectly possible to make a CPU that works with data in 64-bit chunks (which would make it a 64-bit CPU) even though it has only a 16-bit address bus; I can't see any real application for something like that, but it isn't technically a contradiction.

Now, lots of people didn't bother with these techniques on 32-bit CPUs because, around the time such CPUs were common in PCs, 4 GiB was really all you needed, and 32-bit CPUs generally had address buses wide enough for this not to be a concern; even the reduced-capability 80386SX had a 24-bit usable address bus, allowing for 16 MiB of address space when it was introduced in 1988, the same year that saw the introduction of a 20 MB hard disk setup. Not having to concern yourself with segmentation, PAE and similar techniques makes life a lot easier for the programmer. 32-bit server software, however, was commonly written to handle more than 4 GiB of RAM.

And of course, for perspective, 16-bit software regularly worked with files larger than 65,536 bytes. It takes a little thinking if you want your software to natively work with files that are too large to fit into a singly-allocated block of memory, but it definitely isn't impossible.

But I am not clear on what the implications of this are for files. I think we can have files of arbitrary size on our hard disks, right? A lot more than 4 GB.

No, you cannot have arbitrarily large files. Quite apart from the available physical storage space, at the lowest logical level the file system itself puts limits on how large a stored file can be, simply because it needs to be able to record the size of the file somewhere. The exact limit varies with the file system and sometimes with its settings. With modern file systems such as NTFS, ext4 and so on, the limits are high enough that you are unlikely to hit them with a single disk, although they may be a concern if you have a large storage array. For example, NTFS (the file system) supports file sizes of up to 16 EiB, although the NTFS implementation in Windows is currently (artificially) limited to a maximum file size of just under 256 TiB (raised from 16 TiB with the release of Windows 8 and Windows Server 2012).

16 TiB is not an excessively large amount of storage; you can get there by running, for example, 7 disks of 4 TB each in RAID-6 (5 × 4 TB ≈ 18 TiB of usable space), which is certainly within the financial reach even of individuals.

The same thing has been done with different editions of Windows, artificially limiting the amount of usable RAM even though the underlying architecture allowed plenty more to be used.

So are there any caveats with 32-bit systems and large files? I assume that certain 32-bit programs would not be able to load files of more than 4 GB, or am I wrong?

That depends on the software, and to a lesser extent on how it works with its data files. So yes, if the operative words are certain 32-bit programs, then your assumption is almost certainly correct. Then again, certain 64-bit programs might not deal well with huge files either. I run into this occasionally at work; for example, Microsoft Word 2010 refuses (for me) to load any file larger than 512 MB, even though I have plenty more memory available than that, if only it would try to use it.

If the software tries to load the entire file into memory at once (which it really shouldn't) and doesn't have artificial limitations, the limiting factor with current operating systems will be the available virtual memory size. (Note: virtual memory and swap are two completely different things, and you also need to consider memory overcommitment.) If, on the other hand, the software loads only a portion of the file into memory at any one time, then as long as the OS provides facilities to access portions of the file beyond the 32-bit boundary of 4 GiB, and the file system can handle the size of the file, the actual size of the file should hardly be a concern at all; if it is, that is most likely a userland software bug.
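
For the "loads only a portion" case, here is a rough C sketch of what such windowed access looks like on Linux: with a 64-bit off_t even a 32-bit build can pread() a small buffer from far beyond the 4 GiB mark. The file name and the 6 GiB offset are made up for the example.

    /* Read a 4 KiB window from deep inside a huge file without loading the rest. */
    #define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on a 32-bit build */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("huge.bin", O_RDONLY);   /* hypothetical large file */
        if (fd == -1)
            return 1;

        char window[4096];
        off_t where = (off_t)6 * 1024 * 1024 * 1024;         /* 6 GiB into the file */
        ssize_t n = pread(fd, window, sizeof window, where); /* positioned read */
        if (n >= 0)
            printf("read %zd bytes at offset %lld\n", n, (long long)where);

        close(fd);
        return 0;
    }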
