Discussion:
database is getting slower - compact files?
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 14:43:57 UTC
Good morning, experts...

I'm sure this topic has been answered before, but I cannot find any
information on it.

My solution is getting slower: slow rendering of graphics, slow finding
of related records.

I do not believe anything is corrupt; there are just tons and tons of
graphics, files, calculations, etc.

Should I "save copies" of the databases as "compacted files" and then use
these newly created databases instead of the old ones?

Are there other alternatives?

Should I do this on a schedule? Monthly? Yearly?

Running FMP Advanced 13 on Windows - oh, and peer-to-peer with a total of 5 clients.

I always appreciate everyone's help.

Thanks,
Seth
Damian Kelly
2016-02-22 14:46:24 UTC
Files can get slower as a result of architectural issues. For example, a stock calc that adds the ins and takes away the outs will get slower as more records are added.
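
(A sketch of the kind of calc I mean, with made-up table and field names: an unstored calculation along the lines of

    StockOnHand = Sum ( StockIns::Quantity ) - Sum ( StockOuts::Quantity )

has to walk every related StockIns and StockOuts record each time it is evaluated, so it gets slower as those tables grow.)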

Damian
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 14:58:40 UTC
Thanks, Damian... yep, tons and tons of calculations that do exactly that.
But I feel that I might have "gaps" in the databases, and I'm wondering if
I should create and use "compacted files".
Tom Langton VZ
2016-02-22 14:57:17 UTC
“Peer-to-peer” and “tons and tons of graphics, files” probably have a lot to do with the slowdown.

Get FM Server and store graphics and attached files “externally,” as references, not internally in the FM file.

My understanding, at least, is that the compacting process rids the file of the vacant space left by record deletions. So if you’ve moved lots of files in and out of the database, that might help, but I think tons of graphics hosted peer-to-peer is probably your issue.
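
(If you do try it: if I remember the FMP 13 menus right, the compact option is under File > Save a Copy As… with “compacted copy (smaller)” chosen in the save dialog.)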

Good luck.

Tom
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 15:01:00 UTC
Thanks, Tom... yep, this is good advice, and FM Server will come at some
point (just not now). Yes, I've added new records and deleted old records.
Jonathan Fletcher
2016-02-22 14:59:36 UTC
Seth,

Putting it on FM Server will give you a nice speed boost, not to mention the automatic backups and file security advantages.

P2P is considered by many (myself included) to be a recipe for disaster from which there is no recovery. I personally don’t ever recommend it, even for that first extra FMP client.

We no longer have our Winfried Huslik to fix our file corruption mistakes, so we’re on our own now. That means we have to be extra careful.

P2P is not being extra careful.

I know you didn’t want to hear that. Such is often the way of the truth.

—johnno
Post by Seth Pomeroy - Bus. 717-591-5555
running FMP Adv 13 on windows - oh, peer-to-peer with a total of 5 clients
--
Jonathan Fletcher
***@fletcherdata.com

Kentuckiana FileMaker Developers Group
Next Meeting: 2/23/16
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 15:04:40 UTC
Thanks, Jonathan...

This is good advice - I appreciate it. Maybe I just need to move to
Server sooner rather than later. Thanks.
Seth
Riley Waugh
2016-02-22 16:05:02 UTC
Deleting records does leave “gaps” in FileMaker. So if you are not going to go to Server soon, you might want to save a clone and then import all your tables into it as a way of removing all the “ghost” records left from deleting records.

It may be that saving a compacted file removes the ghosts… I am not sure.
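
(Roughly, from memory: File > Save a Copy As… > “clone (no records)” gives you an empty copy with all the layouts, scripts and field definitions, and then File > Import Records > File… from the old file into each table of the clone, matching fields by name. Check the serial-number “next value” settings afterwards, and try it on a backup first.)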

Riley Waugh
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 16:10:54 UTC
OK, thanks, Riley.
Seth
Christopher Bailey
2016-02-22 16:38:15 UTC
In my experience, compacting files has never sped up performance. I'd
actually be kind of alarmed if it did (I like the fact that one can create
hundreds or thousands of temp records, delete them, and performance is
fine). Do compact for space reasons... but expect no performance
improvements from the operation.

I don't think compacting will remove ghost records; you need to do the
cloning thing for that purpose. But I kind of doubt that is your main
problem. You need to be on Server, as others have insisted.

Chris
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 16:41:19 UTC
Got it, thanks for all the advice, Chris.
Seth
Stefan Schütt
2016-02-22 16:52:02 UTC
Hi Chris,

I must say my experience differs somewhat from that.

In many cases, compacting does a good job of making a solution faster, especially if you have deleted lots of records and created new ones.

One of the reasons for this is that, on a traditional hard drive, a file might get fragmented as time goes by. And it takes up more space on the drive, as deleting records will not get rid of the “dead” space.

When you do a compact, the file is rebuilt from scratch, all the data ends up “together,” and the file size gets smaller.

The difference in performance is not as notable with SSD drives as with traditional hard drives, but it is there.

I have one client whose data file is almost 4 GB. The file grows to some 5-6 GB during the year, and about once a year I make a compacted copy, which brings the file size down to some 3.5-4 GB.

And the customer always says that the solution feels a little quicker. And they are running FileMaker Server on a Mac Pro (current model) with an SSD drive.

__
Stefan Schutt, Mouse Up, Finland
Damian Kelly
2016-02-22 16:53:55 UTC
Generally, in my experience, if it's slow, you built it wrong.

Damian
Damian Kelly
2016-02-22 16:57:18 UTC
Not to say I haven’t built a lot of things slow, and thus wrong, in my time :-)

Damian
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 16:58:30 UTC
Thanks for this info as well, Stefan...

I have a 1 TB SSD drive with 32 GB of RAM.

My FM solution is slightly less than 10 GB.

Seth
Tim Ballering
2016-02-22 16:58:40 UTC
Compacting, or cloning and importing, might give a small speed bump if the file is old and tattered. However, if the speed issue is large enough to write about, I doubt those options alone will fix it unless your index is bad.

I would suspect, as others have, that unstored calcs are to blame. As the file grows, so does the amount of processing these calculations require. I would consider an architecture change, such as setting a numeric field by script to replace the unstored calcs.
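
A rough sketch of that approach, with invented table and field names - a nightly script that walks the stock table and writes the total into a plain number field:

    Go to Layout [ “Stock” ]
    Show All Records
    Go to Record/Request/Page [ First ]
    Loop
        Set Field [ Stock::OnHandStored ; Sum ( StockIns::Quantity ) - Sum ( StockOuts::Quantity ) ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop
    Commit Records/Requests [ With dialog: Off ]

Finds and sorts then hit the stored, indexed number instead of re-evaluating an unstored calc for every record.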



Tim Ballering
Damian Kelly
2016-02-22 17:01:56 UTC
That was exactly the route we took initially. We created a script that took the unstored calcs and placed them into fields overnight. That got me some respite, and we have now almost finished a rewrite with no unstored calcs on which people search or report. It's a little trigger-happy and the debugging has been a bitch, but finding stock information in our stock list of 100,000 SKUs is super snappy.

Damian
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 17:06:51 UTC
Very interesting, thanks. I haven't been storing many of the
calculations and, of course, the light bulb just went on - I suppose
that is probably part of the problem (if not all of it). Should all
calculations be stored? Or just a minimal set?
Thanks,
Seth
Damian Kelly
2016-02-22 17:18:58 UTC
It's up to you!

Unstored calcs will always be spot on and are very low maintenance, but they can be slow. If they are not slow, then they are very slow.

Stored calcs are indexable, but storage is only available if you reference only fields in the same table as the calc and don't reference globals. Off the top of my head I would say a calc that calls a function like Get ( CurrentDate ) is also unindexable, but that's a punt.

You can probably get funky with auto-enter calcs too.

Using scripts to set fields opens up a whole host of issues, like record locking and capturing all the events that lead to a value changing. However, the fields are then indexable.
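
As a rough example of the auto-enter route (invented names again): define LineTotal as a plain number field with an auto-enter calculation of

    Quantity * UnitPrice

and untick “Do not replace existing value of field (if any)”, so it re-evaluates whenever Quantity or UnitPrice changes on that record but remains a stored, indexable number.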


Damian
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 17:22:34 UTC
Understood, and thanks again to everyone for all the tips. Thanks.
Seth
Mark Rubenstein
2016-02-22 17:35:38 UTC
The first part is correct; calc fields that reference related tables or global fields cannot be indexed.
However, a calc field that references a function like get ( currentDate ) can either be stored (indexed) or unstored.

For example, if you create a stored calc field = get ( currentDate ), the date will populate the field upon record creation and not change over time (probably not what you want).
However, if you make that same calc field unstored, it will always contain the current date.
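
(If memory serves, the switch between the two is the “Do not store calculation results -- recalculate when needed” checkbox in the field’s Storage Options.)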

Mark
-------------------------------------------------------------------------------------------------------------------------------------
Mark Rubenstein
Post by Damian Kelly
stored calcs are indexable but are only available if you only reference fields in the same table as the calc, and don’t reference globals. Off the top of my head I would say a calc that calls a function like get(currentdate) is also unindexable, but thats a punt.
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 17:43:04 UTC
Got it, thanks. Yes, I use Get ( CurrentDate ) and need it to
recalculate all the time.
Seth
Jonathan Fletcher
2016-02-22 17:44:19 UTC
Technically, it will not always “contain” the current date, but will evaluate to the current date when you access it, such as going to a layout where it is displayed, or attempting to use a calculation that then uses it.

As may be imagined, this can create a performance hit at times. If you need today’s date a lot, or have a bunch of calculations that depend on it, it can be way more efficient to set it to a stored value every day. That can be done with a scheduled server script, the startup script of the first person in each morning, or even everyone’s startup script just setting the day’s date.
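
A minimal sketch of the “set the day’s date” idea, with invented names - a startup or server-scheduled script:

    Go to Layout [ “Invoices” ]
    Show All Records
    Replace Field Contents [ With dialog: Off ; Invoices::Today ; Get ( CurrentDate ) ]

where Today is a plain date field. Calculations that need today’s date can then reference Today and stay stored and indexable, instead of calling Get ( CurrentDate ) in an unstored calc.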

—johnno
Post by Mark Rubenstein
For example, if you create a stored calc field = get ( currentDate ), the date will populate the field upon record creation and not change over time (probably not what you want).
However, if you make that same calc field unstored, it will always contain the current date.
--
Jonathan Fletcher
***@fletcherdata.com

Kentuckiana FileMaker Developers Group
Next Meeting: 2/23/16
Seth Pomeroy - Bus. 717-591-5555
2016-02-22 17:45:37 UTC
Good thoughts, thank you.
Seth