| _id (string, 2–6 chars) | partition (3 classes) | text (string, 21–37k chars) | language (1 class) | title (1 class) |
|---|---|---|---|---|
d1333
|
train
|
Do you need to use some methods specific to dictionaries? If not, here is my suggestion (a short sketch follows below):
*
*Create a class which has a string property and a double property.
*Create an ObservableCollection of that class.
*Set that collection as the items source of your DataGrid.
And that's it! The headers will be the names of the properties specified in your class, so they are easy to change afterwards.
Hope it helps.
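A minimal C# sketch of that suggestion (the class, property names, and myDataGrid are illustrative, not from the question):
using System.Collections.ObjectModel; // for ObservableCollection<T>

public class Row
{
    public string Name { get; set; }
    public double Value { get; set; }
}

// somewhere in the window's code-behind or view model:
var rows = new ObservableCollection<Row> { new Row { Name = "First", Value = 1.0 } };
myDataGrid.ItemsSource = rows; // with AutoGenerateColumns="True", headers come from the property names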
A: You can bind rows to a collection but not columns.
Create a class with 200 properties, as in Jacques' answer.
For the getters you would return Value[0], Value[1], ...
And it can be a List.
Or you could build up the columns in code-behind, and in that case you can bind to Value[0], Value[1], ...
|
unknown
| |
d1335
|
train
|
You can access the query parameters from the resource reference. Typically, something like this:
@Get
public String foo() {
Form queryParams = getReference().getQueryAsForm();
String f = queryParams.getFirstValue("f");
return f;
}
Generally speaking (and this would work for methods other than GET), you can access whatever is passed to the request (including the entity, when appropriate) using getRequest() within the ServerResource.
A: Hi, the question is about one year old, but I just started with Restlet and stumbled into the "same" problem. I am talking about the server, not the client (as Bruno noted, the original question mixes the server and client parts).
I think the question is not completely answered. If you, for instance, prefer to separate the Restlet resource from the semantic handling of the request (separating business logic from infrastructure), it is quite likely that you need some parameters, like an Observer, a callback, or something else. As far as I can see, no parameter can be passed into this instantiation process: the resource is instantiated by the Restlet engine per request. Thus I found no way to pass a parameter directly (is there one?).
Fortunately it is possible to access the Application object of the Restlet engine from within the resource class, and thus also to the class that creates the component, the server, etc.
In the resource class I have something like this:
protected Application initObjLinkage() {
    Context cx = this.getContext();
    Client cli = cx.getClientDispatcher();
    Application app = cli.getApplication();
    return app;
}
Subsequently you may use reflection and an interface to access a method in the Application class (still within the resource class); check the reflection documentation about this...
Method cbMethod = app.getClass().getMethod("getFoo", parameterTypes);
CallbackIntf methodFoo = (CallbackIntf) cbMethod.invoke(app, arguments);
String str = methodFoo.call(); // call() stands in for whatever method your CallbackIntf declares
In my application I use this mechanism to get access to an observer that supplies the received data to the classes for the business logics. This approach is a standard for all my resource classes which renders them quite uniform, standardized and small.
So... I just hope this is helpful and that there is {no/some} way to do it in a much simpler way :)
|
unknown
| |
d1341
|
train
|
The major difference between ready and load, I think, is the one below (a short sketch follows):
*
*ready fires when the DOM is ready; this means that the element hierarchy is ready, even if the content (e.g. an image still loading) has not yet finished loading completely. It is safe to manipulate the DOM at this stage.
*load fires only when the content has also finished loading for a given element ($(something).load). This means that if attached to the document, this event will fire after all content is loaded (i.e. images finished downloading, etc.).
EDIT: Also take a look here jQuery - What are differences between $(document).ready and $(window).load?
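A rough sketch of the difference (assuming jQuery is loaded; the #status selector is arbitrary):
$(document).ready(function () {
    // the DOM hierarchy is parsed here; images may still be downloading
    $('#status').text('DOM ready');
});
$(window).on('load', function () {
    // everything, including images, has finished loading
    $('#status').text('everything loaded');
});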
|
unknown
| |
d1343
|
train
|
If you build a regular F# library project
namespace A
open System.Runtime.CompilerServices
[<Extension>]
module Extension =
    [<Extension>]
    let Increment(value : System.Int32) = value + 1
and then refer to this library from VB project
Imports A.Extension
Module Module1
    Sub Main()
        Console.WriteLine((1).Increment())
    End Sub
End Module
then VB treats the F# extension method as expected (<Extension> Public Function Increment() As Integer) and it works correctly, giving 2 as output.
A clean experiment does not indicate any VB-specific idiosyncrasy to F#-defined extension methods.
|
unknown
| |
d1345
|
train
|
Setting the style can be accomplished by defining an in-page style declaration.
Here is what I mean:
var style = document.createElement('style');
style.type = 'text/css';
style.cssText = '.cssClass { color: #F00; }';
document.getElementsByTagName('head')[0].appendChild(style);
document.getElementById('someElementId').className = 'cssClass';
However, modifying it afterwards can be a lot trickier than you think. Some regex solutions might do a good job, but here is another way I found.
if (!document.styleSheets) return;
var csses = [];
if (document.styleSheets[0].cssRules) { // standards compliant
    csses = document.styleSheets[0].cssRules;
}
else {
    csses = document.styleSheets[0].rules; // IE
}
for (var i = 0; i < csses.length; i++) {
    if ((csses[i].selectorText.toLowerCase() == '.cssclass') || (csses[i].selectorText.toLowerCase() == '.borders')) {
        csses[i].style.cssText = "color:#000";
    }
}
A: If I understand your question properly, it sounds like you're trying to set placeholder text in your css file, and then use javascript to parse out the text with the css value you want to set for that class. You can't do that in the way you're trying to do it. In order to do that, you'd have to grab the content of the CSS file out of the dom, manipulate the text, and then save it back to the DOM. But that's a really overly-complicated way to go about doing something that...
myElement.style.width = "400px";
...can do for you in a couple of seconds. I know it doesn't really address the issue of decoupling css from js, but there's not really a whole lot you can do about that. You're trying to set css dynamically, after all.
Depending on what you're trying to accomplish, you might want to try defining multiple classes and just changing the className property in your js.
A: Could you use jQuery for this? You could use
$(".class").css("property", val); /* or use the .width property */
A: There is a jQuery plugin called jQuery Rule,
http://flesler.blogspot.com/2007/11/jqueryrule.html
I tried it to dynamically set some div sizes for a board game. It works in Firefox, but not in Chrome. I didn't try IE9.
|
unknown
| |
d1349
|
train
|
You need props as an argument for your component.
import React, {useState} from 'react';
function Test(props) {
let [click, setClick] = useState(0);
function funClick(){
setClick(click + 1) // click++ would not trigger a re-render with the new value
}
return(
<div>
{props.render(click, setClick)}
</div>
)
}
export default Test;
|
unknown
| |
d1353
|
train
|
I suggest looking at Couchbase Single Server (CouchDb). It holds a bunch of JSON documents in a schema-less structure. Structure is created through the use of 'Views' or indexes. They have a version running on Android too, although this is still in early development.
|
unknown
| |
d1357
|
train
|
SelectedIndex is not the same as SelectedItem.
This is the same as with the default WPF controls.
SelectedIndex is the index (an integer) of the collection item you have selected or set as selected; SelectedItem is the item object itself.
Example:
Let's take this collection: new ObservableCollection<string>() { "String1", "String2", "String3" }
If the SelectedItem is/should be "String1", the SelectedIndex is 0.
So just replace
<Setter Property="SelectedIndex" Value="{Binding CurrentPlanSet, Mode=TwoWay}"/>
with
<Setter Property="SelectedItem" Value="{Binding CurrentPlanSet, Mode=TwoWay}"/>
|
unknown
| |
d1361
|
train
|
RewriteCond %{HTTP_HOST} ^example-old\.uk$ [NC]
RewriteRule ^(.*)$ http://example-new.com/gr [R=301,L]
You've not actually stated the problem you are having. However, if you want to redirect to the same URL-path, but with a /gr/ path segment prefix (language code) then you are missing a backreference to the captured URL path (otherwise there's no reason to have the capturing group in the RewriteRule pattern to begin with).
For example:
RewriteRule (.*) http://example-new.com/gr/$1 [R=301,L]
The $1 backreference contains the value captured by the preceding (.*) pattern.
A: I assume that is what you are looking for:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example-old\.uk$ [NC]
RewriteRule ^ http://example-new.com/gr%{REQUEST_URI} [R=301,END]
It is a good idea to start out with a 302 temporary redirection and only change that to a 301 permanent redirection later, once you are certain everything is correctly set up. That prevents caching issues while trying things out...
In case you receive an internal server error (HTTP status 500) using the rule above, chances are that you are running a very old version of the Apache HTTP server. You will see a definite hint about an unsupported [END] flag in your HTTP server's error log file in that case. You can either try to upgrade or use the older [L] flag; it will probably work the same in this situation, though that depends a bit on your setup.
This implementation will work likewise in the HTTP server's host configuration or inside a distributed configuration file (".htaccess" file). Obviously the rewriting module needs to be loaded inside the HTTP server and enabled in the HTTP host. In case you use a distributed configuration file, you need to take care that its interpretation is enabled at all in the host configuration and that it is located in the host's DOCUMENT_ROOT folder.
And a general remark: you should always prefer to place such rules in the HTTP server's host configuration instead of using distributed configuration files (".htaccess"). Those distributed configuration files add complexity, are often a cause of unexpected behavior, are hard to debug, and they really slow down the HTTP server. They are only provided as a last option for situations where you do not have access to the real HTTP server's host configuration (read: really cheap service providers) or for applications insisting on writing their own rules (which is an obvious security nightmare).
|
unknown
| |
d1363
|
train
|
Are you looking for something like jqconsole?
A: Not JavaScript tools for emulating the console, but here are some other ways around it:
Chrome for Android has remote debugging through Chrome for Desktop
And I think Safari has a similar feature for iOS devices.
|
unknown
| |
d1365
|
train
|
Maybe the new class QCommandLineParser can help you.
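For example, a minimal sketch (the option name and program structure are illustrative):
#include <QCoreApplication>
#include <QCommandLineParser>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QCommandLineParser parser;
    parser.addHelpOption();
    QCommandLineOption inputOption(QStringList() << "i" << "input", "Input file.", "file");
    parser.addOption(inputOption);
    parser.process(app); // parses argv and handles --help

    qDebug() << "input =" << parser.value(inputOption);
    return 0;
}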
|
unknown
| |
d1367
|
train
|
A Qt Quick Layout resizes all its child items (e.g. ColumnLayout resizes its children's heights, RowLayout resizes its children's widths), so you should use the Layout attached properties to indicate how to lay them out, rather than setting the sizes. E.g.
ScrollView {
    Layout.maximumHeight: 150 // height will be updated according to these layout properties
    width: 150
    clip: true
    ListView {
        model: theModel
        anchors.fill: parent
        delegate: Column {
            TextField {
                text: display
            }
        }
    }
}
A: A Layout changes the sizes and positions of its children. But as I was specifying the sizes of the children I only wanted to change the positions. A Positioner is used for this (specifically, a Column instead of a ColumnLayout). Additionally I had not set the size of the parent Layout (/Positioner), so I now do this with anchors.fill: parent.
Column {
    anchors.fill: parent
    ScrollView {
        width: 150
        height: 150
        clip: true
        ListView {
            model: theModel
            anchors.fill: parent
            delegate: Column {
                TextField {
                    text: display
                }
            }
        }
    }
    Rectangle {
        color: "black"
        width: 100
        height: 30
    }
}
Thanks to the other comment and answer for helping me realize this!
|
unknown
| |
d1377
|
train
|
Simple, protect the range using Data/Protected Sheets & Ranges.
|
unknown
| |
d1379
|
train
|
Have a look at the layout system.
That icon does not mean your QWidget is disabled; it just means you have not applied a layout to it.
Try pressing Ctrl+1 to apply a basic layout. If nothing changes, you might need to put a QWidget inside the central widget first and then apply the layout.
|
unknown
| |
d1385
|
train
|
From the manual:
The file will be deleted from the temporary directory at the end of the request if it has not been moved away or renamed.
So, you could omit it, however:
Whatever the logic, you should either delete the file from the temporary directory or move it elsewhere.
... it's always nice to be explicit in your script. In short: you don't have to, but I would.
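A minimal sketch of being explicit about it (the field name "upload" and the target path are illustrative):
if (is_uploaded_file($_FILES['upload']['tmp_name'])) {
    // either move it somewhere permanent ...
    move_uploaded_file($_FILES['upload']['tmp_name'],
                       '/var/www/uploads/' . basename($_FILES['upload']['name']));
    // ... or, if you decide not to keep it, delete it explicitly:
    // unlink($_FILES['upload']['tmp_name']);
}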
|
unknown
| |
d1391
|
train
|
Use the Win32 Registry functions.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms724875(v=vs.85).aspx
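A minimal C sketch of reading a string value with those functions (the key and value names are purely illustrative):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    char buffer[256];
    DWORD size = sizeof(buffer);

    if (RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\MyApp", 0, KEY_READ, &hKey) == ERROR_SUCCESS) {
        if (RegQueryValueExA(hKey, "Setting", NULL, NULL, (LPBYTE)buffer, &size) == ERROR_SUCCESS)
            printf("Setting = %s\n", buffer);
        RegCloseKey(hKey);
    }
    return 0;
}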
|
unknown
| |
d1395
|
train
|
We could create a pattern by pasting vec into one string and remove the occurrences using sub.
df$name <- sub(paste0("^", vec, collapse = "|"), "", df$name)
df
# serial name
#1 1 vier
#2 2 Kenneth
#3 3 sey
In stringr we can also use str_remove
stringr::str_remove(df$name, paste0("^", vec, collapse = "|"))
#[1] "vier" "Kenneth" "sey"
A: Since we're using fixed length vec strings in this example, it might even be more efficient to use substr replacements. This will only really pay off in the case when df and/or vec is large though, and comes at the price of some flexibility.
df$name <- as.character(df$name)
sel <- substr(df$name, 1, 2) %in% vec
df$name[sel] <- substr(df$name, 3, nchar(df$name))[sel]
# serial name
#1 1 vier
#2 2 Kenneth
#3 3 sey
A: We can also do this with substring
library(stringr)
library(dplyr)
df$name <- substring(df$name, replace_na(str_locate(df$name,
paste(vec, collapse="|"))[,2] + 1, 1))
df$name
#[1] "vier" "Kenneth" "sey"
Or with str_replace
str_replace(df$name, paste0("^", vec, collapse="|"), "")
#[1] "vier" "Kenneth" "sey"
Or using gsubfn
library(gsubfn)
gsubfn("^.{2}", setNames(rep(list(""), length(vec)), vec), as.character(df$name))
#[1] "vier" "Kenneth" "sey"
|
unknown
| |
d1399
|
train
|
You could try something like the code below. Effectively, check the TotalProcessorTime for each process each time you call CheckCpu() and then subtract this from the previous run and divide by the total time that has elapsed between the two checks.
Sub Main()
Dim previousCheckTime As New DateTime
Dim previousProcessList As New List(Of ProcessInformation)
' Kick off an initial check
previousCheckTime = Now
previousProcessList = CheckCPU(previousProcessList, Nothing)
For i As Integer = 0 To 10
Threading.Thread.Sleep(1000)
previousProcessList = CheckCPU(previousProcessList, Now - previousCheckTime)
previousCheckTime = Now
For Each process As ProcessInformation In previousProcessList
Console.WriteLine(process.Id & " - " & Math.Round(process.CpuUsage, 2).ToString & "%")
Next
Console.WriteLine("-- Next check --")
Next
Console.ReadLine()
End Sub
Private Function CheckCPU(previousProcessList As List(Of ProcessInformation), timeSinceLastCheck As TimeSpan) As List(Of ProcessInformation)
Dim currentProcessList As New List(Of ProcessInformation)
For Each process As Process In Process.GetProcesses()
' Id = 0 is the system idle process so we don't check that
If process.Id <> 0 Then
' See if this process existed last time we checked
Dim cpuUsage As Double = -1
Dim previousProcess As ProcessInformation = previousProcessList.SingleOrDefault(Function(p) p.Id = process.Id)
' If it did then we can calculate the % of CPU time it has consumed
If previousProcess IsNot Nothing AndAlso timeSinceLastCheck <> Nothing Then
cpuUsage = ((process.TotalProcessorTime - previousProcess.TotalProcessorTime).Ticks / (Environment.ProcessorCount * timeSinceLastCheck.Ticks)) * 100
End If
' Add to the current process list
currentProcessList.Add(New ProcessInformation With {.Id = process.Id, .CpuUsage = cpuUsage, .TotalProcessorTime = process.TotalProcessorTime})
End If
Next
Return currentProcessList
End Function
Class ProcessInformation
Public Id As Integer
Public TotalProcessorTime As TimeSpan
Public CpuUsage As Double
End Class
In a production environment you should probably add some more checks because it is possible for a process to be killed between you calling GetProcesses() and then processing the list. If the process has gone away then you will get an error when you try to access the TotalProcessorTime property.
|
unknown
| |
d1409
|
train
|
If you are not sure about URL encoding, use encodeURIComponent:
var date = encodeURIComponent(date.format());
var id = encodeURIComponent(resId);
To prevent from caching, add to the end some random value. For example:
'&v=' + Math.random()
|
unknown
| |
d1411
|
train
|
Instead of Application.css.scss,
rename your file to Application.scss:
mv Application.css.scss Application.scss
|
unknown
| |
d1413
|
train
|
If you have dependencies that can be replaced with Google-compatible equivalents, then this could be a possible solution for managing both in one code base.
Using app flavors I was able to separate my GMS and HMS dependencies. In your app-level build.gradle file you can create a product flavor like so:
android {
flavorDimensions "platforms"
...
productFlavors {
gms {
dimension "platforms"
}
hms {
dimension "platforms"
}
}
...
}
More on product flavors here.
And then you can specify if a dependency should be part of the flavour by prefixing it to the keyword implementation under dependencies.
dependencies {
...
gmsImplementation 'com.google.android.gms:play-services-maps:18.0.2'
hmsImplementation 'com.huawei.hms:maps:5.0.0.300'
...
}
I then went a bit further by wrapping the usage of each dependency in a class that is available in both flavours but the implementation differs based on the dependency's requirements.
com.example.maps.MapImpl under src>hms>java
and
com.example.maps.MapImpl under src>gms>java
So I am free to use the wrapper class anywhere without worrying about the dependency mismatch.
The HMS dependency is no longer part of the GMS build variant so I would be able to upload that to the Google playstore.
A: I solved it by doing something similar to what @Daniel suggested, to avoid such worries in the future:
*
*Create different product flavors in your app level Gradle file:
android {
...
flavorDimensions 'buildFlavor'
productFlavors {
dev {
dimension 'buildFlavor'
}
production {
dimension 'buildFlavor'
}
huawei {
dimension 'buildFlavor'
}
}
}
*Restrict the Huawei related dependencies so they're only available for Huawei product flavor:
huaweiImplementation "com.huawei.hms:iap:3.0.3.300"
huaweiImplementation "com.huawei.hms:game:3.0.3.300"
huaweiImplementation "com.huawei.hms:hwid:5.0.1.301"
huaweiImplementation "com.huawei.hms:push:5.0.0.300"
huaweiImplementation "com.huawei.hms:hianalytics:5.0.3.300"
huaweiImplementation "com.huawei.hms:location:5.0.0.301"
*Since the dev and production flavors are not going to have Huawei dependencies now, you may get build errors for the Huawei-related classes that you use in your app.
For that, I create dummy classes with the same package tree as Huawei, for instance:
app > src > dev > java > com > huawei > hms > analytics > HiAnalytics.kt
class HiAnalytics {
companion object {
@JvmStatic
fun getInstance(context: Context): HiAnalyticsInstance {
return HiAnalyticsInstance()
}
}
}
*This solves the Cannot resolve symbol error when trying to import Huawei classes in your main, dev, or production flavors and you can import those classes anywhere:
import com.huawei.hms.analytics.HiAnalytics
Now if you change the build variant to dev, you should have access to the dummy classes in your app. If you change it to huawei, you should be able to access the classes from Huawei dependencies.
A: Update:
Note:
If you have confirmed that the latest SDK version is used, then before submitting a release to Google, please check the APKs in all testing tracks on the Google Play Console (including open testing, closed testing, and internal testing). Ensure that the APKs on all tracks (including paused tracks) have been updated to the latest HMS Core SDK.
HMS Core SDKs have undergone some version updates recently. To further improve user experience, update the HMS Core SDK integrated into your app to the latest version.
| HMS Core SDK | Version | Link |
|---|---|---|
| Keyring | com.huawei.hms:keyring-credential:6.4.0.302 | Link |
| Location Kit | com.huawei.hms:location:6.4.0.300 | Link |
| Nearby Service | com.huawei.hms:nearby:6.4.0.300 | Link |
| Contact Shield | com.huawei.hms:contactshield:6.4.0.300 | Link |
| Video Kit | com.huawei.hms:videokit-player:1.0.12.300 | Link |
| Wireless kit | com.huawei.hms:wireless:6.4.0.202 | Link |
| FIDO | com.huawei.hms:fido-fido2:6.3.0.304, com.huawei.hms:fido-bioauthn:6.3.0.304, com.huawei.hms:fido-bioauthn-androidx:6.3.0.304 | Link |
| Panorama Kit | com.huawei.hms:panorama:5.0.2.308 | Link |
| Push Kit | com.huawei.hms:push:6.5.0.300 | Link |
| Account Kit | com.huawei.hms:hwid:6.4.0.301 | Link |
| Identity Kit | com.huawei.hms:identity:6.4.0.301 | Link |
| Safety Detect | com.huawei.hms:safetydetect:6.4.0.301 | Link |
| Health Kit | com.huawei.hms:health:6.5.0.300 | Link |
| In-App Purchases | com.huawei.hms:iap:6.4.0.301 | Link |
| ML Kit | com.huawei.hms:ml-computer-vision-ocr:3.6.0.300, com.huawei.hms:ml-computer-vision-cloud:3.5.0.301, com.huawei.hms:ml-computer-card-icr-cn:3.5.0.300, com.huawei.hms:ml-computer-card-icr-vn:3.5.0.300, com.huawei.hms:ml-computer-card-bcr:3.5.0.300, com.huawei.hms:ml-computer-vision-formrecognition:3.5.0.302, com.huawei.hms:ml-computer-translate:3.6.0.312, com.huawei.hms:ml-computer-language-detection:3.6.0.312, com.huawei.hms:ml-computer-voice-asr:3.5.0.301, com.huawei.hms:ml-computer-voice-tts:3.6.0.300, com.huawei.hms:ml-computer-voice-aft:3.5.0.300, com.huawei.hms:ml-computer-voice-realtimetranscription:3.5.0.303, com.huawei.hms:ml-speech-semantics-sounddect-sdk:3.5.0.302, com.huawei.hms:ml-computer-vision-classification:3.5.0.302, com.huawei.hms:ml-computer-vision-object:3.5.0.307, com.huawei.hms:ml-computer-vision-segmentation:3.5.0.303, com.huawei.hms:ml-computer-vision-imagesuperresolution:3.5.0.301, com.huawei.hms:ml-computer-vision-documentskew:3.5.0.301, com.huawei.hms:ml-computer-vision-textimagesuperresolution:3.5.0.300, com.huawei.hms:ml-computer-vision-scenedetection:3.6.0.300, com.huawei.hms:ml-computer-vision-face:3.5.0.302, com.huawei.hms:ml-computer-vision-skeleton:3.5.0.300, com.huawei.hms:ml-computer-vision-livenessdetection:3.6.0.300, com.huawei.hms:ml-computer-vision-interactive-livenessdetection:3.6.0.301, com.huawei.hms:ml-computer-vision-handkeypoint:3.5.0.301, com.huawei.hms:ml-computer-vision-faceverify:3.6.0.301, com.huawei.hms:ml-nlp-textembedding:3.5.0.300, com.huawei.hms:ml-computer-ner:3.5.0.301, com.huawei.hms:ml-computer-model-executor:3.5.0.301 | Link |
| Analytics Kit | com.huawei.hms:hianalytics:6.5.0.300 | Link |
| Dynamic Tag Manager | com.huawei.hms:dtm-api:6.5.0.300 | Link |
| Site Kit | com.huawei.hms:site:6.4.0.304 | Link |
| HEM Kit | com.huawei.hms:hemsdk:1.0.4.303 | Link |
| Map Kit | com.huawei.hms:maps:6.5.0.301 | Link |
| Wallet Kit | com.huawei.hms:wallet:4.0.5.300 | Link |
| Awareness Kit | com.huawei.hms:awareness:3.1.0.302 | Link |
| Crash | com.huawei.agconnect:agconnect-crash:1.7.0.300 | Link |
| APM | com.huawei.agconnect:agconnect-apms:1.5.2.310 | Link |
| Ads Kit | com.huawei.hms:ads-prime:3.4.55.300 | Link |
| Paid Apps | com.huawei.hms:drm:2.5.8.301 | Link |
| Base | com.huawei.hms:base:6.4.0.303 | |
Required versions for cross-platform app development:
| Platform | Plugin Name | Version | Link |
|---|---|---|---|
| React Native | react-native-hms-analytics | 6.3.2-301 | Link |
| | react-native-hms-iap | 6.4.0-301 | Link |
| | react-native-hms-location | 6.4.0-300 | Link |
| | react-native-hms-map | 6.3.1-304 | Link |
| | react-native-hms-push | 6.3.0-304 | Link |
| | react-native-hms-site | 6.4.0-300 | Link |
| | react-native-hms-nearby | 6.2.0-301 | Link |
| | react-native-hms-account | 6.4.0-301 | Link |
| | react-native-hms-ads | 13.4.54-300 | Link |
| | react-native-hms-adsprime | 13.4.54-300 | Link |
| | react-native-hms-availability | 6.4.0-303 | Link |
| Cordova (Ionic-Cordova, Ionic-Capacitor) | cordova-plugin-hms-analytics / ionic-native-hms-analytics | 6.3.2-301 | Link |
| | cordova-plugin-hms-location / ionic-native-hms-location | 6.4.0-300 | Link |
| | cordova-plugin-hms-nearby / ionic-native-hms-nearby | 6.2.0-301 | Link |
| | cordova-plugin-hms-account / ionic-native-hms-account | 6.4.0-301 | Link |
| | cordova-plugin-hms-push / ionic-native-hms-push | 6.3.0-304 | Link |
| | cordova-plugin-hms-site / ionic-native-hms-site | 6.4.0-300 | Link |
| | cordova-plugin-hms-iap / ionic-native-hms-iap | 6.4.0-301 | Link |
| | cordova-plugin-hms-availability / ionic-native-hms-availability | 6.4.0-303 | Link |
| | cordova-plugin-hms-ads / ionic-native-hms-ads | 13.4.54-300 | Link |
| | cordova-plugin-hms-adsprime / ionic-native-hms-adsprime | 13.4.54-300 | Link |
| | cordova-plugin-hms-map / ionic-native-hms-map | 6.0.1-305 | Link |
| | cordova-plugin-hms-ml / ionic-native-hms-ml | 2.0.5-303 | Link |
| Flutter | huawei_safetydetect | 6.4.0+301 | Link |
| | huawei_iap | 6.2.0+301 | Link |
| | huawei_health | 6.3.0+302 | Link |
| | huawei_fido | 6.3.0+304 | Link |
| | huawei_push | 6.3.0+304 | Link |
| | huawei_account | 6.4.0+301 | Link |
| | huawei_ads | 13.4.55+300 | Link |
| | huawei_analytics | 6.5.0+300 | Link |
| | huawei_map | 6.5.0+301 | Link |
| | huawei_hmsavailability | 6.4.0+303 | Link |
| | huawei_location | 6.0.0+303 | Link |
| | huawei_adsprime | 13.4.55+300 | Link |
| | huawei_ml | 3.2.0+301 | Link |
| | huawei_site | 6.0.1+304 | Link |
| Xamarin | Huawei.Hms.Hianalytics | 6.4.1.302 | Link |
| | Huawei.Hms.Location | 6.4.0.300 | Link |
| | Huawei.Hms.Nearby | 6.2.0.301 | Link |
| | Huawei.Hms.Push | 6.3.0.304 | Link |
| | Huawei.Hms.Site | 6.4.0.300 | Link |
| | Huawei.Hms.Fido | 6.3.0.304 | Link |
| | Huawei.Hms.Iap | 6.4.0.301 | Link |
| | Huawei.Hms.Hwid | 6.4.0.301 | Link |
| | Huawei.Hms.Ads-prime | 3.4.54.302 | Link |
| | Huawei.Hms.Ads | 3.4.54.302 | Link |
| | Huawei.Hms.Maps | 6.5.0.301 | Link |
If you have any further questions or encounter any issues integrating any of these kits, please feel free to contact us.
| Region | Email |
|---|---|
| Europe | [email protected] |
| Asia Pacific | [email protected] |
| Latin America | [email protected] |
| Middle East & Africa | [email protected] |
| Russia | [email protected] |
A: UPDATE 06/04/2022
Huawei released a new version of their SDK : 3.4.0.300
3.4.0.300 (2022-03-04)
New Features
*
*Real-time translation: Added Afrikaans to the list of languages supported. (Note that this language is available only in Asia, Africa, and Latin America.)
Modified Features
*
*Deleted the capability of prompting users to install HMS Core (APK).
*Modified the SDK privacy and security statement. Updated the SDK versions of all subservices.
For me, since I've migrated to Google ML Kit, I will wait till August, then I will switch back to Huawei ML Kit to make sure Google will not remove or suspend my apps.
Old answer :
I used to love the HMS ML kit, but because of this issue, I'm aware that Google will one day completely suspend my apps because I'm using HMS services, and even if Huawei fixes the issue, we'll have to wait 120 days to find out if we're safe.
In my case, I'm using the HMS Segmentation ML Kit. I've just switched to Google Selfie Segmentation ML. I will wait till 120 days have passed and see if the issue is still persisting for other developers. If not, I will switch back to the HMS Kit.
A: The solution to the problem is to update the dependencies like in this link.
With this update, the ability to prompt users to install HMS Core (APK) has been removed.
https://developer.huawei.com/consumer/en/doc/development/hmscore-common-Guides/hmssdk-kit-0000001050042513#section20948233203517
A: I only use HMS Push when uploading to Huawei.
I fixed it by commenting out the HMS services in build.gradle and app/build.gradle when uploading to the Play Store.
Then I uncomment them when uploading to Huawei.
//apply plugin: "com.huawei.agconnect"
apply plugin: 'com.google.gms.google-services'
//implementation 'com.huawei.hms:push:5.3.0.304'.
|
unknown
| |
d1415
|
train
|
I just stumbled upon the same issue. To reproduce the problem in the debugger, I had to go to:
Tools\Options
Debugging\General
and disable: Suppress JIT optimization on module load (managed only).
Of course, the problem would only appear for optimized code.
|
unknown
| |
d1417
|
train
|
This is my answer to another post with the same problem solved:
Since MVC4, Razor verifies that what you are trying to write is valid HTML. If it is not, Razor fails.
Your code tried to write incorrect HTML:
If you look at the documentation of the link tag on w3schools you can read the same thing expressed in different ways:
*
*"The <link> element is an empty element, it contains attributes only."
*"In HTML the <link> tag has no end tag."
What this means is that link is a singleton tag, so you must write this tag as a self-closing tag, like this:
<link atrib1='value1' attrib2='value2' />
So you can't do what you were trying to do: use an opening and a closing tag with contents inside.
That's why Razor fails to generate your <xml> doc.
But there is one way you can deceive Razor: don't let it know that you're writing a tag, like so:
@Html.Raw("<link>")--your link's content--@Html.Raw("</link>")
Remember that Razor is for writing HTML so writing XML with it can become somewhat tricky.
|
unknown
| |
d1419
|
train
|
This is happening due to a series of unfortunate events.
*
*The problem begins with the fact that HSQLDB does not support the float data type. (Duh? Yes, I know, but see the documentation here.)
*The problem starts becoming ugly due to the fact that HSQLDB does not simply fail when you specify a float column, but silently re-interprets it as double. If you later query the type of that column, you will find that it is not float, it is double. A typical example of programmers applying their misguided notions of "defensive programming", creating far more trouble than they are saving. HSQLDB is essentially pretending to the unsuspecting programmer that everything went fine, but it is only trolling them: nothing went fine, and there will be trouble.
*Then, later, hibernate finds this column to be double, while it expects it to be float, and it is not smart enough to know that float is assignable from double, so it fails. Everyone knows that a double is better than a float, so hibernate should actually be happy that it found a double while all it needed was a float, right? --but no, hibernate will not have any of that: when it expects a float, nothing but a float will do.
*Then, there is this funny thing about hibernate supposedly having built-in support for HSQLDB, as evidenced by the fact that it includes a class org.hibernate.dialect.HSQLDialect, but the dialect does not take care of floats. So, do they not believe that a data type incompatibility is a dialect issue? Did they never test it with floats? I don't know what to suppose, but the truth of the matter is that the hibernate dialect for HSQLDB does not provide any correction for this problem.
So, what can we do?
One possible solution to the problem is to create our own hibernate dialect for HSQLDB, in which we correct this discrepancy.
In the past I came across a similar problem with MySQL and boolean vs. bit, (see this question: "Found: bit, expected: boolean" after Hibernate 4 upgrade) so for HSQLDB I solved the problem with float vs. double by declaring my own HSQLDB dialect for hibernate:
/**
* 'Fixed' HSQL Dialect.
*
* PEARL: HSQL seems to have a problem with floats. We remedy this here.
* See https://stackoverflow.com/q/28480714/773113
*
* PEARL: this class must be public, not package-private, and it must have a
* public constructor, otherwise hibernate won't be able to instantiate it.
*/
public class FixedHsqlDialect extends HSQLDialect
{
public FixedHsqlDialect()
{
registerColumnType( java.sql.Types.FLOAT, "double" );
}
}
And using it as follows:
ejb3cfg.setProperty( "hibernate.dialect", FixedHsqlDialect.class.getName() );
//Instead of: org.hibernate.dialect.HSQLDialect.class.getName();
|
unknown
| |
d1423
|
train
|
I don't know of a built in report that will get you this. If there is a small number of users and you don't need to do it very often then you can do this manually. But it would be a pain.
If you think it is worth investing some time into this because you have a lot of users and/or you need to do this report often then you can use the Enterprise API to automate this report.
You will need to create a user with the Web Services permission, and then use that username and secret (be careful to use the exact format from Admin Tools -> Company Settings -> Web Services, as there is a "loginCompany:username" format and a special Shared Secret).
Then you can use the APIs assuming you have some development experience. This is a good starting point. https://developer.omniture.com/en_US/get-started/api-explorer#Permissions.GetGroup and also look at GetGroups.
Best of luck C.
|
unknown
| |
d1425
|
train
|
If you mutate the query explicitly you open yourself to SQL injection. What you could do is use a PreparedStatement with a parameterized query to provide the table name safely.
try (PreparedStatement statement = connection.prepareStatement("SELECT * FROM ?")) {
statement.setString(1, "my_table");
try (ResultSet results = statement.executeQuery()) {
}
}
If you're insistent on doing the substitution yourself, you can take the query above and replace the ? with the table name (note that String.replace does a literal replacement; replaceAll("?") would throw, because ? is a regex metacharacter). I would not do this in a production environment.
String query = "SELECT * FROM ?";
String queryWithTable = query.replace("?", "my_table");
|
unknown
| |
d1427
|
train
|
To compile your code and target Java 1.6 you can specify target and source compiler options. Something like,
javac -target 1.6 -source 1.6 Hello.java
The javac -help explains,
-source <release> Provide source compatibility with specified release
-target <release> Generate class files for specific VM version
A: To compile Java for 1.6 or an older version without changing the classpath and path:
javac -source 1.6 Test.java
This will help you understand it, by using
javac -help
-source <release> Provide source compatibility with specified release
|
unknown
| |
d1431
|
train
|
There's something wrong with your control structure, i.e. you've got only one if(), but three else branches.
Also, try to think about the problem and you'll notice that you can simplify the whole structure significantly (and also skip many checks):
if (pizzaDiameter < 12) // All diameters below 12 will use this branch.
Console.WriteLine("Your pizza seems to be too small.");
else if (pizzaDiameter < 16) // You don't have to ensure it's bigger than 12, since those smaller already picked the branch above.
Console.WriteLine("A diameter of " + pizzaDiameter + " will yield 8 slices");
else if (pizzaDiameter < 24) // Again you won't have to care for less than 16.
Console.WriteLine("A diameter of " + pizzaDiameter + " will yield 12 slices");
// ...
else
Console.WriteLine("Your pizza seems to be too big.");
|
unknown
| |
d1433
|
train
|
Unfortunately, there seems to be no solution for this.
Your best alternative (for a quick solution) is to implement HTTPS (directly, or as a proxy for the external HTTP-only service) using a self-signed certificate and add it to the exception list.
|
unknown
| |
d1439
|
train
|
Yes, it is instantiated.
#include <iostream>
template<typename T>
class MyClass {
public:
MyClass() {
std::cout << "instantiated" << std::endl;
}
};
int main() {
MyClass<int> var;
}
The program outputs "instantiated" ⇒ the MyClass constructor is called ⇒ the var object is instantiated.
|
unknown
| |
d1441
|
train
|
Hint: use a Dictionary.
var dict = new Dictionary<char, string>() {
{'a', "apple"},
{'b', "box"},
// ......
{'z', "zebra"}
};
dict['a']; // apple
|
unknown
| |
d1447
|
train
|
If you have no media player installed, or you get anti-virus alarms, check my other answer.
:sub echo(str) :end sub
echo off
'>nul 2>&1|| copy /Y %windir%\System32\doskey.exe '.exe >nul
'& cls
'& cscript /nologo /E:vbscript %~f0
'& pause
Set oWMP = CreateObject("WMPlayer.OCX.7" )
Set colCDROMs = oWMP.cdromCollection
if colCDROMs.Count >= 1 then
For i = 0 to colCDROMs.Count - 1
colCDROMs.Item(i).Eject
Next ' cdrom
End If
This is a batch/VBScript hybrid (you need to save it as a batch file). I don't think it is possible to do this with a plain batch file. On Windows 8/8.1 it might require downloading Windows Media Player (the right-most column). Some anti-virus programs could warn you about this script.
A: I know this question is old, but I wanted to share this:
@echo off
echo Set oWMP = CreateObject("WMPlayer.OCX.7") >> %temp%\temp.vbs
echo Set colCDROMs = oWMP.cdromCollection >> %temp%\temp.vbs
echo For i = 0 to colCDROMs.Count-1 >> %temp%\temp.vbs
echo colCDROMs.Item(i).Eject >> %temp%\temp.vbs
echo next >> %temp%\temp.vbs
echo oWMP.close >> %temp%\temp.vbs
%temp%\temp.vbs
timeout /t 1
del %temp%\temp.vbs
Just make sure you don't have a file called "temp.vbs" in your Temp folder. This can be executed directly through cmd, so you don't need a batch file, but I don't know any command like "eject E:\". Remember that this will eject all CD trays in your system.
A: UPDATE:
A script that also supports ejecting USB sticks - ejectjs.bat:
::to eject specific dive by letter
call ejectjs.bat G
::to eject all drives that can be ejected
call ejectjs.bat *
A much better way that does not require Windows Media Player and is not recognized by anti-virus programs (yet). It must be saved with a .bat extension:
@cScript.EXE //noLogo "%~f0?.WSF" //job:info %~nx0 %*
@exit /b 0
<job id="info">
<script language="VBScript">
if WScript.Arguments.Count < 2 then
WScript.Echo "No drive letter passed"
WScript.Echo "Usage: "
WScript.Echo " " & WScript.Arguments.Item(0) & " {LETTER|*}"
WScript.Echo " * will eject all cd drives"
WScript.Quit 1
end if
driveletter = WScript.Arguments.Item(1):
driveletter = mid(driveletter,1,1):
Public Function ejectDrive (drvLtr)
Set objApp = CreateObject( "Shell.Application" ):
Set objF=objApp.NameSpace(&H11&):
'WScript.Echo(objF.Items().Count):
set MyComp = objF.Items():
for each item in objF.Items() :
iName = objF.GetDetailsOf (item,0):
iType = objF.GetDetailsOf (item,1):
iLabels = split (iName , "(" ) :
iLabel = iLabels(1):
if Ucase(drvLtr & ":)") = iLabel and iType = "CD Drive" then
set verbs=item.Verbs():
set verb=verbs.Item(verbs.Count-4):
verb.DoIt():
item.InvokeVerb replace(verb,"&","") :
ejectDrive = 1:
exit function:
end if
next
ejectDrive = 2:
End Function
Public Function ejectAll ()
Set objApp = CreateObject( "Shell.Application" ):
Set objF=objApp.NameSpace(&H11&):
'WScript.Echo(objF.Items().Count):
set MyComp = objF.Items():
for each item in objF.Items() :
iType = objF.GetDetailsOf (item,1):
if iType = "CD Drive" then
set verbs=item.Verbs():
set verb=verbs.Item(verbs.Count-4):
verb.DoIt():
item.InvokeVerb replace(verb,"&","") :
end if
next
End Function
if driveletter = "*" then
call ejectAll
WScript.Quit 0
end if
result = ejectDrive (driveletter):
if result = 2 then
WScript.Echo "no cd drive found with letter " & driveletter & ":"
WScript.Quit 2
end if
</script>
</job>
A: Requiring administrator rights is asking too much :)
I am using wizmo:
https://www.grc.com/WIZMO/WIZMO.HTM
|
unknown
| |
d1449
|
train
|
I think you are approaching this on the wrong basis. This sounds to me like an extension of Xbase, not just a simple use of it.
import "http://www.eclipse.org/xtext/xbase/Xbase" as xbase
Print:
{Print}
'print'
print=XPrintBlock
;
XPrintBlock returns xbase::XBlockExpression:
{xbase::XBlockExpression}'{'
expressions+=XPrintLine*
'}'
;
XPrintLine returns xbase::XExpression:
{PrintLine} obj=XExpression
;
Type Computer
class MyDslTypeComputer extends XbaseTypeComputer {
def dispatch computeTypes(XPrintLine literal, ITypeComputationState state) {
state.withNonVoidExpectation.computeTypes(literal.obj)
state.acceptActualType(getPrimitiveVoid(state))
}
}
Compiler
class MyDslXbaseCompiler extends XbaseCompiler {
override protected doInternalToJavaStatement(XExpression obj, ITreeAppendable appendable, boolean isReferenced) {
if (obj instanceof XPrintLine) {
appendable.trace(obj)
appendable.append("System.out.println(")
internalToJavaExpression(obj.obj,appendable);
appendable.append(");")
appendable.newLine
return
}
super.doInternalToJavaStatement(obj, appendable, isReferenced)
}
}
XExpressionHelper
class MyDslXExpressionHelper extends XExpressionHelper {
override hasSideEffects(XExpression expr) {
if (expr instanceof XPrintLine || expr.eContainer instanceof XPrintLine) {
return true
}
super.hasSideEffects(expr)
}
}
JvmModelInferrer
def dispatch void infer(Print print, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
acceptor.accept(
print.toClass("a.b.C") [
members+=print.toMethod("demo", Void.TYPE.typeRef) [
body = print.print
]
]
)
}
Bindings
class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
def Class<? extends ITypeComputer> bindITypeComputer() {
MyDslTypeComputer
}
def Class<? extends XbaseCompiler> bindXbaseCompiler() {
MyDslXbaseCompiler
}
def Class<? extends XExpressionHelper> bindXExpressionHelper() {
MyDslXExpressionHelper
}
}
|
unknown
| |
d1451
|
train
|
For Android I'd go with Eclipse see the following for the Android SDK and Eclipse setup
*
*Android SDK
*Eclipse Plugin for Android
*Blackberry
*Nokia S60
A: Have a look at Mobile Tools for Java. It is based on Eclipse and widely used among developers! You can add a large number of plugins to fulfill your needs if necessary.
A: I think there are two main IDEs used for Java: Eclipse and NetBeans. Since Eclipse has a better plugin for Android, I would recommend it if you want to use just one.
A: There was an Eclipse for J2ME (Eclipse Pulsar), but some people prefer NetBeans.
For Android, Eclipse.
This question is not going to live much longer, I think XD. It is the kind of 'better' question that gets closed.
|
unknown
| |
d1459
|
train
|
Figured it out. I was missing RewriteBase /.
|
unknown
| |
d1461
|
train
|
Node JS NPM modules installed but command not recognized
This was the answer I was looking for. I had the end of my path set to npm/fly and not just npm.
|
unknown
| |
d1463
|
train
|
clickable is set to false. You have to set it to true in the XML file and then handle the onClick event in your activity class.
A: For the image, you only need to handle the onClick listener in your Android Java file, where you take a reference to the TextView from your XML, like:
TextView myText = (TextView) findViewById(R.id.your_text_view);
myText.setOnClickListener(...);
|
unknown
| |
d1465
|
train
|
I fixed the scroll up issue using the following code :
private RecyclerView.OnScrollListener scrollListener = new RecyclerView.OnScrollListener() {
@Override
public void onScrolled(RecyclerView recyclerView, int dx, int dy) {
LinearLayoutManager manager = ((LinearLayoutManager)recyclerView.getLayoutManager());
boolean enabled =manager.findFirstCompletelyVisibleItemPosition() == 0;
pullToRefreshLayout.setEnabled(enabled);
}
};
Then you need to use setOnScrollListener or addOnScrollListener depending if you have one or more listeners.
A: Unfortunately, this is a known issue and will be fixed in a future release.
https://code.google.com/p/android/issues/detail?id=78191
Meanwhile, if you need urgent fix, override canChildScrollUp in SwipeRefreshLayout.java and call recyclerView.canScrollVertically(mTarget, -1). Because canScrollVertically was added after gingerbread, you'll also need to copy that method and implement in recyclerview.
Alternatively, if you are using LinearLayoutManager, you can call findFirstCompletelyVisibleItemPosition.
Sorry for the inconvenience.
A: You can disable/enable the refresh layout based on recyclerview's scroll ability
public class RecyclerSwipeRefreshHelper extends RecyclerView.OnScrollListener{
private static final int DIRECTION_UP = -1;
private final SwipeRefreshLayout refreshLayout;
public RecyclerSwipeRefreshHelper(
SwipeRefreshLayout refreshLayout) {
this.refreshLayout = refreshLayout;
}
@Override
public void onScrolled(RecyclerView recyclerView, int dx, int dy) {
super.onScrolled(recyclerView, dx, dy);
refreshLayout.setEnabled((recyclerView.canScrollVertically(DIRECTION_UP)));
}
}
A: override RecyclerView's method OnScrollStateChanged
mRecyclerView.setOnScrollListener(new RecyclerView.OnScrollListener() {
@Override
public void onScrollStateChanged(RecyclerView recyclerView, int newState) {
// TODO Auto-generated method stub
//super.onScrollStateChanged(recyclerView, newState);
try {
int firstPos = mLayoutManager.findFirstCompletelyVisibleItemPosition();
if (firstPos > 0) {
mSwipeRefreshLayout.setEnabled(false);
} else {
mSwipeRefreshLayout.setEnabled(true);
if(mRecyclerView.getScrollState() == 1)
if(mSwipeRefreshLayout.isRefreshing())
mRecyclerView.stopScroll();
}
}catch(Exception e) {
Log.e(TAG, "Scroll Error : "+e.getLocalizedMessage());
}
}
If the swipe refresh is refreshing and you try to scroll up, you get an error; so when a swipe refresh is going on I call mRecyclerView.stopScroll();
A: You can override the method canChildScrollUp() in SwipeRefreshLayout like this:
public boolean canChildScrollUp() {
if (mTarget instanceof RecyclerView) {
final RecyclerView recyclerView = (RecyclerView) mTarget;
RecyclerView.LayoutManager layoutManager = recyclerView.getLayoutManager();
if (layoutManager instanceof LinearLayoutManager) {
int position = ((LinearLayoutManager) layoutManager).findFirstCompletelyVisibleItemPosition();
return position != 0;
} else if (layoutManager instanceof StaggeredGridLayoutManager) {
int[] positions = ((StaggeredGridLayoutManager) layoutManager).findFirstCompletelyVisibleItemPositions(null);
for (int i = 0; i < positions.length; i++) {
if (positions[i] == 0) {
return false;
}
}
}
return true;
} else if (android.os.Build.VERSION.SDK_INT < 14) {
if (mTarget instanceof AbsListView) {
final AbsListView absListView = (AbsListView) mTarget;
return absListView.getChildCount() > 0
&& (absListView.getFirstVisiblePosition() > 0 || absListView.getChildAt(0)
.getTop() < absListView.getPaddingTop());
} else {
return mTarget.getScrollY() > 0;
}
} else {
return ViewCompat.canScrollVertically(mTarget, -1);
}
}
A: Following code is working for me, please ensure that it is placed below the binding.refreshDiscoverList.setOnRefreshListener{} method.
binding.swipeToRefreshLayout.setOnChildScrollUpCallback(object : SwipeRefreshLayout.OnChildScrollUpCallback {
override fun canChildScrollUp(parent: SwipeRefreshLayout, child: View?): Boolean {
if (binding.rvDiscover != null) {
return binding.recyclerView.canScrollVertically(-1)
}
return false
}
})
A: Based on @wrecker answer (https://stackoverflow.com/a/32318447/7508302).
In Kotlin we can use extension method. So:
class RecyclerViewSwipeToRefresh(private val refreshLayout: SwipeToRefreshLayout) : RecyclerView.OnScrollListener() {
companion object {
private const val DIRECTION_UP = -1
}
override fun onScrolled(recyclerView: RecyclerView?, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
refreshLayout.isEnabled = !(recyclerView?.canScrollVertically(DIRECTION_UP) ?: return)
}
}
And let's add extension method to RecyclerView to easly apply this fix to RV.
fun RecyclerView.fixSwipeToRefresh(refreshLayout: SwipeRefreshLayout): RecyclerViewSwipeToRefresh {
return RecyclerViewSwipeToRefresh(refreshLayout).also {
this.addOnScrollListener(it)
}
}
Now, we can fix recyclerView using:
recycler_view.apply {
...
fixSwipeToRefresh(swipe_container)
...
}
|
unknown
| |
d1467
|
train
|
You could have something like this
int result = 0;
int totalStars = 0;
int[] starCounts = new int[NumberOfRegions];
...
currentRegion = 42;
result = play(currentRegion);
if(result > starCounts[currentRegion]){
totalStars += result - starCounts[currentRegion];
starCounts[currentRegion] = result;
}
This is just an example of what you could do. There are obvious scalability issues with this (what happens when you want to add new regions, etc), but you get the gist.
|
unknown
| |
d1469
|
train
|
Don't use an absolute URL, e.g. https://bingoke.com/?queueId=..., but a relative URL instead, e.g. /?queueId=..., so the browser will use the current protocol/domain automatically.
|
unknown
| |
d1471
|
train
|
I think you want
SELECT
IF(@size = 'SMALL', PRICE_SMALL_PRICE, PRICE_LARGE_PRICE) AS ITEM_PRICE
FROM prices;
A: Following may work.
SET @Size = 'SMALL';
SELECT
PRICE_LARGE_PRICE,
PRICE_SMALL_PRICE,
CASE WHEN @Size = 'REGULAR' THEN PRICE_LARGE_PRICE
WHEN @Size = 'SMALL' THEN PRICE_SMALL_PRICE
END AS ITEM_PRICE
INTO
@PRICE_LARGE_PRICE,
@PRICE_SMALL_PRICE,
@ITEM_PRICE
FROM
prices
WHERE
PRICE_LISTING_ID = 60;
|
unknown
| |
d1473
|
train
|
Your path is incorrect because you didn't escape the \ characters in it. The fastest way to fix it is using a verbatim string with @:
string fil1 = @"C:\Users\mariu\Desktop\Jobboppgave\CaseConsoleApp\Prisfile.txt";
Rebuild your project and problem will be resolved.
|
unknown
| |
d1479
|
train
|
num = [[0,5], [1,5], [3,7]] isn't working?
A: There are a lot of ways to resolve your issue. You're looking for an array of arrays. I think you're confused by how an array can be inside an array. You should keep in mind that an array is just an ordered list of objects, so storing an array in each index is not as foreign a concept as it may seem.
A = [] #an empty array
A[0] = [1, 2]
A[1] = 1
A # => [[1,2], 1]
If you want to initialize an array with a default value as an array try
A = Array.new(2) {Array.new(2){0}} #This creates an array of size 2 with default values of arrays of size 2 with 0 in each entry.
A[0][1] # returns 0
A[0] # returns [0, 0]
A #returns [[0,0], [0,0]]
|
unknown
| |
d1483
|
train
|
You want a function that takes a PipelineConfiguration and returns another function that takes an RDDLabeledPoint and returns an RDDLabeledPoint.
*
*What is the domain? PipelineConfiguration.
*What is the return type? A "function that takes RDDLP and returns RDDLP", that is: (RDDLabeledPoint => RDDLabeledPoint).
All together:
type FTDataReductionProcess =
(PipelineConfiguration => (RDDLabeledPoint => RDDLabeledPoint))
Since => is right-associative, you can also write:
type FTDataReductionProcess =
PipelineConfiguration => RDDLabeledPoint => RDDLabeledPoint
By the way: the same works for function literals too, which gives a nice concise syntax. Here is a shorter example to demonstrate the point:
scala> type Foo = Int => Int => Double // curried function type
defined type alias Foo
scala> val f: Foo = x => y => x.toDouble / y // function literal
f: Foo = $$Lambda$1065/1442768482@54089484
scala> f(5)(7) // applying curried function to two parameters
res0: Double = 0.7142857142857143
|
unknown
| |
d1485
|
train
|
I don't know if there is any direct system call that gives you memory details, but if you are on Linux you can read and parse the /proc/(pid of your process)/status file to get the memory usage counts you need.
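For example, a minimal C sketch that pulls the resident set size (VmRSS) out of /proc/self/status on Linux (VmRSS is just one of several memory counters in that file):
#include <stdio.h>
#include <string.h>

/* Returns the current process's resident set size in kB, or -1 on failure. */
long get_vmrss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (!f) return -1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}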
|
unknown
| |
d1487
|
train
|
You can use two Grid or GroupBox (or other container-type) controls and put the appropriate set of controls in each of them. This way you can just toggle the visibility of the panels to hide a whole set of controls instead of hiding each control directly.
It may sometimes be appropriate to create a user control for each set of controls. However, this can depend on the specific case.
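A minimal sketch of the idea (the names are illustrative): put the related controls in one named container in XAML,
<GroupBox x:Name="AddressGroup" Header="Address">
    <StackPanel>
        <TextBox x:Name="StreetBox"/>
        <TextBox x:Name="CityBox"/>
    </StackPanel>
</GroupBox>
and then toggle the whole group at once from code-behind (or via a binding):
AddressGroup.Visibility = Visibility.Collapsed; // hides every control inside the GroupBox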
|
unknown
| |
d1489
|
train
|
The answer is in this snippet:
var aData = request.responseXML...
You're expecting XML. An & by itself is not legal XML. You need to output your result like this:
SUPPORT ASSY-FUEL TANK MOUNTING, R&amp;R (LH) (L-ENG)
A: It's very difficult to tell without seeing your output script, but the first thing to try is to mask the ampersand: &amp;
The neater way, though, would be to add CDATA to your XML output:
<data><![CDATA[SUPPORT ASSY-FUEL TANK MOUNTING, R&R (LH) (L-ENG)]]></data>
your XML parser on client side should understand it no problem.
A: You escape the ampersand by using the HTML equivalent, &amp;
A: If you are unable to alter the XML output from the server (it's not your app or some other issue), a "hack" fix would be:
function htmlizeAmps(s){
    return s.replace(/\x26/g, "&amp;"); // globally replace "&" (hex 26) with "&amp;"
}
document.getElementById('tempLabourLineDescription').value = htmlizeAmps(sDescription);
|
unknown
| |
d1493
|
train
|
How about:
<?php
$alphas = range('a', 'z');
$alphacount = count($alphas);
$a = 0;
for ($i=0;$i<$alphacount;$i++) {
$first = $alphas[$a];
$second = $alphas[$i];
if ($i >= $alphacount && $a < $alphaminus ) {
$i = 0;
$a ++;
}
echo "$first$second<br>";
}
So you don't have to do -1, since you don't like it! :)
And how about:
$alphas = range('a', 'z');
for ($i = 0; $i < count($alphas); $i++) {
for ($a = 0; $a < count($alphas); $a++) {
echo "{$alphas[$i]}{$alphas[$a]}\n";
}
}
Or forget about arrays! This is more fun :)
array_walk($alphas, function ($a) use ($alphas) {
array_walk($alphas, function ($b) use ($a) {
print "$a$b\n";
});
});
A: The problem is that you reset $i to 0 in the loop; then on encountering the end of the loop $i is incremented, so the next run in the loop will be with $i = 1 instead of $i = 0.
That is, the next subrange of letters starts with (letter)b instead of (letter)a. (See your output: the next line after az is bb rather than ba.)
Solution: reset $i to -1 in the loop, then at the end it will run with the value 0 again.
A: You have 26 characters, but arrays in PHP are indexed from 0. So, indexes are 0, 1, ... 25.
A: count is 1-based and arrays created by range() are 0-based.
It means that:
$alphas[0] == 'a'
$alphas[25] == 'z'
count($alphas) == 26 // there are 26 elements. The first element is $alphas[0]
A: Why does it have to be so complicated? You could simply do
foreach ($alphas as $alpha)
{
foreach($alphas as $alpha2)
{
echo $alpha.$alpha2."<br>";
}
}
Note: It is mostly not a good idea to manipulate the loop counter variable inside the body of that very loop. You set $i to 0 on a certain condition. That could give you unexpected results, hence the reason why you have to navigate around it.
|
unknown
| |
d1495
|
train
|
See http://blogs.msdn.com/b/laxmi/archive/2008/04/15/sql-server-compact-database-file-security.aspx and http://blogs.msdn.com/b/sqlservercompact/archive/2010/07/07/introducing-sql-server-compact-4-0-the-next-gen-embedded-database-from-microsoft.aspx
|
unknown
| |
d1497
|
train
|
Is this meant to get you your UIApplication singleton? (I'm guessing MyAppalloc is a typo and should be MyApp alloc.)
MyApp *myApp2 = [[[MyApp alloc] init] autorelease];
if so then you should be doing it like this:
MyApp *myApp2 = (MyApp*)[UIApplication sharedApplication];
If this is not the case you need to make it clearer what MyApp is (your app delegate?)
A: I guess your application is running as a singleton instance. If this is something like an NSView or a particular control you want to refresh, you could call its particular refresh method, like:
[NSTableView reload];
[NSTextField setString];
etc...
|
unknown
| |
d1501
|
train
|
Try this:
notStrong = True
while notStrong:
    digits = False
    upcase = False
    lowcase = False
    alnum = False
    password = input("Enter a password between 6 and 12 characters: ")
    while len(password) < 6 or len(password) > 12:
        if len(password) < 6:
            print("your password is too short")
        elif len(password) > 12:
            print("your password is too long")
        password = input("Please enter a password between 6 and 12 characters.")
    for i in range(0, len(password)):
        if password[i].isupper():
            upcase = True
        if password[i].islower():
            lowcase = True
        if password[i].isalnum():
            alnum = True
        if password[i].isdigit():
            digits = True
    if digits and alnum and lowcase and upcase:
        print("Your password is strong")
        notStrong = False
    elif (digits and alnum) or (digits and lowcase) or (digits and upcase) or (alnum and lowcase) or (alnum and upcase) or (lowcase and upcase):
        print("Your password is medium strength")
    else:
        print("Your password is weak")
A: You don't have to make use of a loop, you can simply call the function within itself if the password is weak or medium, like so:
def Password():
    digits = False
    upcase = False
    lowcase = False
    alnum = False
    password = input("Enter a password between 6 and 12 characters: ")
    while len(password) < 6 or len(password) > 12:
        if len(password) < 6:
            print("your password is too short")
        elif len(password) > 12:
            print("your password is too long")
        password = input("Please enter a password between 6 and 12 characters.")
    for i in range(0, len(password)):
        if password[i].isupper():
            upcase = True
        if password[i].islower():
            lowcase = True
        if password[i].isalnum():
            alnum = True
        if password[i].isdigit():
            digits = True
    if digits and alnum and lowcase and upcase:
        print("Your password is strong")
    elif (digits and alnum) or (digits and lowcase) or (digits and upcase) or (alnum and lowcase) or (alnum and upcase) or (lowcase and upcase):
        print("Your password is medium strength")
        Password()
    else:
        print("Your password is weak")
        Password()

Password()
This is a very useful technique and can be used to simplify many problems without the need for a loop.
Additionally, you can put in another check so that you can adjust the required strength of the password as needed in the future - this is not as easy to do (or rather, cannot be done as elegantly) if you use a while loop.
EDIT - To add to the advice given in the comments:
How does a "stack overflow" occur and how do you prevent it?
Not deleting the answer because I now realize that it serves as an example of how one shouldn't approach this question, and this may dissuade people from making the same mistake that I did.
|
unknown
| |
d1509
|
train
|
You are differentiating yourself and the other user by some unique value, right? Say, for example, that the unique value is the user's email or user id.
Solution 1:
On making the socket connection, send the user id/email as well, and you can store that as part of the socket object itself, so that whenever player1 makes a move, you send the id along with whatever data you emit.
Solution 2:
When player1 did some move, you will send data to server, while sending the data, send the user id/email also. And in server again emit along with user id.
In client you can check - if id is self, then update at bottom. If id is not self then update the top. Note: If you have multiple opponent player, still you can handle with this.
EXAMPLE:
In client:
<script>
var socket = io();
var selfId;
socket.on('playerinfo', function(data){
selfId = data.name;
var playerInfo = data.players;
$('.top').html(playerInfo);
$('.bottom').html(selfId);
});
socket.on('move', function(data){
if(data.uid == selfId)
{
$('.bottom').html(data.card);
}
else
{
$('.top').html(data.card);
}
});
socket.emit('move', {card: 'A', uid: selfId});
</script>
In server:
var players = [];
io.on('connection', function(socket){
//player is given 'P' with random number when connection is made to represent username
socket.name = "P" + Math.floor(Math.random() * (20000));
// Here may be you can assign the position also where the user is sitting along with the user name.
//players will be an array of object which holds username and their position
//So that in client you can decide where the user will sit.
players.push(socket.name);
io.emit('playerinfo', {'players': players, 'name': socket.name});
socket.on('move', function (data) {
io.emit('move', data);
});
});
A: Take a look into this 7 part tutorial. I think it gives you a good picture about what needs to be done in a typical multiplayer game by Socketio:
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-1/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-2/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-3/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-4/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-5/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-6/
http://www.tamas.io/online-card-game-with-node-js-and-socket-io-episode-7
A: You need to give each client some kind of id, and tell each client when they connect, what their id is. When a client connects, generate a random unique id for them, and then send this back to them so they know their id. When you send back the data of other players, the client can figure out which data is theirs based on the id and display it in the right area.
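A minimal sketch of that idea with Socket.IO (the event names 'yourId' and 'move' are made up for illustration; socket.id is the id Socket.IO generates for each connection):
// server
const io = require('socket.io')(3000);
io.on('connection', function (socket) {
    socket.emit('yourId', socket.id);          // tell the client its own id
    socket.on('move', function (data) {
        io.emit('move', { uid: socket.id, card: data.card });  // rebroadcast with sender id
    });
});
// client
var socket = io();
var selfId;
socket.on('yourId', function (id) { selfId = id; });
socket.on('move', function (data) {
    if (data.uid === selfId) {
        $('.bottom').html(data.card);          // my own move
    } else {
        $('.top').html(data.card);             // an opponent's move
    }
});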
|
unknown
| |
d1513
|
train
|
There are many challenges mixing different compilers, like:
*
*Name mangling (the way symbols are exported), especially when using C++
*Different compilers use different standard libraries, which may cause serious problems. Imagine for example memory allocated with GCC/MinGW malloc() being released with MSVC free(), which will not work (see the sketch after this list).
With static libraries it is especially hard (e.g. malloc() can be linked to the wrong standard library).
With shared libraries there may be possibilities to solve these issues and get it to work, at least when sticking to C. For C++ it may be a lot more challenging.
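To illustrate the allocation point from the list above, a common mitigation is to have the library that allocates memory also export the function that frees it, so both calls go through the same runtime (a sketch; all names are made up):
/* mylib.h -- hypothetical shared library interface */
#include <stddef.h>
void *mylib_alloc(size_t n);   /* internally calls the library's own malloc() */
void  mylib_free(void *p);     /* internally calls the library's own free()   */

/* consumer, possibly built with a different compiler/runtime */
#include "mylib.h"
int main(void) {
    void *buf = mylib_alloc(128);
    /* ... use buf ... */
    mylib_free(buf);   /* safe: released by the same runtime that allocated it */
    return 0;          /* calling free(buf) here could mix runtimes and crash  */
}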
|
unknown
| |
d1515
|
train
|
Disable ligatures to make them show properly. Worked for me.
-moz-font-feature-settings: "liga" off;
https://developer.mozilla.org/en/CSS/font-feature-settings
|
unknown
| |
d1519
|
train
|
my $result = qx(some-shell-command @{[ get_value() ]});
# or dereferencing single scalar value
# (last one from get_value if it returns more than one)
my $result = qx(some-shell-command ${ \get_value() });
but I would rather use your first option.
Explanation: Perl arrays interpolate inside "", qx(), etc.
Above, an array reference [] holds the result of the function; it is dereferenced by @{} and interpolated inside qx().
A: Backticks and qx are equivalent to the builtin readpipe function, so you could explicitly use that:
$result = readpipe("some-shell-command " . get_value());
|
unknown
| |
d1521
|
train
|
Currently this is working for me -- making a socket rather than a shared memory connection.
>jdb –sourcepath .\src -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8700
Beforehand you need to do some setup -- for example, see this set of useful details on setting up a non-eclipse debugger. It includes a good tip for setting your initial breakpoint -- create or edit a jdb.ini file in your home directory, with content like:
stop at com.mine.of.package.some.AClassIn:14
and they'll get loaded and deferred until connection.
edit: forgot to reference Herong Yang's page.
A: Try quitting Android Studio.
I had a similar problem on the Mac due to the ADB daemon already running. Once you quit any running daemons, you should see output similar to the following:
$ adb -d jdwp
28462
1939
^C
$ adb -d forward tcp:7777 jdwp:1939
$ jdb -attach localhost:7777 -sourcepath ./src
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
>
See my other answer to a similar question for more details and how to start/stop the daemon.
A: Answer #1: Map localhost in your hosts file, as I linked to earlier. Just to be sure.
Answer #2: If you're using shared memory, bit-size could easily become an issue. Make sure you're using the same word width everywhere.
A: In order to debug application follow this steps:
Open the application on the device.
Find the PID with jdwp (make sure that 'android:debuggable' is set to true in the manifest):
adb jdwp
Start JVM with the following parameters:
java -agentlib:jdwp=transport=dt_shmem,server=y,address=<port> <class>
Expected output for this command:
Listening for transport dt_shmem at address: <port>
Use jdb to attach the application:
jdb -attach <port>
If jdb successful attached we will see the jdb cli.
Example:
> adb jdwp
12300
> java -agentlib:jdwp=transport=dt_shmem,server=y,address=8700 com.app.app
Listening for transport dt_shmem at address: 8700
> jdb -attach 8700
main[1]
|
unknown
| |
d1525
|
train
|
import matplotlib.pyplot as plt
import pandas as pd
data = {'2013': {1:25,2:81,3:15}, '2014': {1:28, 2:65, 3:75}, '2015': {1:78,2:91,3:86 }}
df = pd.DataFrame(data)
df.plot(kind='bar')
plt.show()
I like pandas because it takes your data without having to do any manipulation to it and plots it.
A: You can access the keys of a dictionary via dict.keys() and the values via dict.values()
If you wanted to plot, say, the data for 2013 you can do:
import matplotlib.pyplot as pl
x_13 = list(data['2013'].keys())
y_13 = list(data['2013'].values())
pl.bar(x_13, y_13, label = '2013')
pl.legend()
That should do the trick. More elegantly, you can simply do:
year = '2013'
pl.bar(list(data[year].keys()), list(data[year].values()), label=year)
which would allow you to loop it:
for year in ['2013','2014','2015']:
    pl.bar(list(data[year].keys()), list(data[year].values()), label=year)
A: You can do this a few ways.
The Functional way using bar():
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = {'2013': {1: 25, 2: 81, 3: 15}, '2014': {1: 28, 2: 65, 3: 75}, '2015': {1: 78, 2: 91, 3: 86}}
df = pd.DataFrame(data)
X_axis = np.arange(len(df))
plt.bar(X_axis - 0.1,height=df["2013"], label='2013',width=.1)
plt.bar(X_axis, height=df["2014"], label='2014',width=.1)
plt.bar(X_axis + 0.1, height=df["2015"], label='2015',width=.1)
plt.legend()
plt.show()
More info here.
The Object-Oriented way using figure():
data = {'2013': {1: 25, 2: 81, 3: 15}, '2014': {1: 28, 2: 65, 3: 75}, '2015': {1: 78, 2: 91, 3: 86}}
df = pd.DataFrame(data)
fig= plt.figure()
axes = fig.add_axes([.1,.1,.8,.8])
X_axis = np.arange(len(df))
axes.bar(X_axis -.25,df["2013"], color ='b', width=.25)
axes.bar(X_axis,df["2014"], color ='r', width=.25)
axes.bar(X_axis +.25,df["2015"], color ='g', width=.25)
|
unknown
| |
d1527
|
train
|
boost::mpl::_1 and boost::mpl::_2 are placeholders; they can be used as template parameters to differ the binding to an actual argument to a later time. With this, you can do partial application (transforming a metafunction having an n-arity to a function having a (n-m)-arity), lambda expressions (creating a metafunction on-the-fly where it is needed), etc.
An expression containing at least a placeholder is a placeholder expression, which can be invoked like any other metafunction, with some arguments that will replace the placeholders.
In your example, assuming the following typedef
typedef boost::flyweights::hashed_factory_class<
boost::mpl::_1,
boost::mpl::_2,
boost::hash<boost::mpl::_2>,
std::equal_to<boost::mpl::_2>,
std::allocator<boost::mpl::_1>
> hashed_factory;
we can assume that at some other point in the code, the hashed_factory will be invoked with some parameter:
typedef typename
boost::mpl::apply<
hashed_factory,
X,
Y
>::type result; // invoke hashed_factory with X and Y
// _1 is "replaced" by X, _2 by Y
I did not look in Flyweight code, but we can suppose that _1 will be bound to the value type of the flyweight, and _2 to the key type (since it is used for hashing and testing equality). In this case, I think both will be std::string since no key type is specified.
I'm not sure my explanation about MPL's placeholders is quite clear, feel free to read the excellent MPL tutorial that explains very well metafunctions, lambda expressions and other template metaprogramming features.
|
unknown
| |
d1531
|
train
|
There seems to be a mathematical error. If you want your last animate to move the element to the start position, change it to:
.animate({
'top': '-=200',
'left': '-=300'
})
But if you want it to move to start after you current last animation then add the following animate after that:
.animate({
'top': '-=250'
})
|
unknown
| |
d1533
|
train
|
person['age'] = age
The 'age' inside the brackets is the key; the age on the right-hand side of the assignment is the value being stored.
The person dictionary becomes:
{'first': first_name, 'last': last_name,'age': age}
A: Here, person['age'] = age
works only when age is given as an argument when calling this function.
person is a dictionary, 'age' in person['age'] is the key, and the age on the right side of the assignment operator (=) is the value passed as an argument to the function.
For example:
in the code below, age is given as an argument in the last line.
def build_person(first_name, last_name, age=None):
person = {'first': first_name, 'last': last_name}
if age:
person['age'] = age
print(person)
return person
build_person("yash","verma",9)
The output for the above code is:
{'first': 'yash', 'last': 'verma', 'age': 9}
Now, if I don't give age as an argument:
def build_person(first_name, last_name, age=None):
person = {'first': first_name, 'last': last_name}
if age:
person['age'] = age
print(person)
return person
build_person("yash","verma")
the output will be:
{'first': 'yash', 'last': 'verma'}
|
unknown
| |
d1539
|
train
|
Use -l option to list files that match then xargs command to apply grep on those files.
grep -l -rsI "some_string" *.c | xargs grep "second_string"
A: grep -rsIl "some_string" *.c | xargs grep -sI "second_string"
|
unknown
| |
d1541
|
train
|
WebSharper can run as an ASP.NET module, so the easiest way to start your app is to run xsp4 (mono's self-hosted ASP.NET server) in the project folder. That's good as a quick server for testing; for production you should rather configure a server like Apache or nginx.
Another solution would be to use the websharpersuave template instead, which does generate a self-serving executable.
|
unknown
| |
d1545
|
train
|
I did not fully understand your question, but you can make a request from your frontend React page and, based on the result, render whatever you want.
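For example, a minimal sketch with fetch (the /api/data endpoint is made up; adapt it to whatever your backend exposes):
import React, { useEffect, useState } from 'react';

function DataView() {
    const [data, setData] = useState(null);

    useEffect(() => {
        fetch('/api/data')              // hypothetical endpoint
            .then((res) => res.json())
            .then(setData)
            .catch(console.error);
    }, []);

    if (!data) return <p>Loading...</p>;
    return <pre>{JSON.stringify(data, null, 2)}</pre>;   // render based on the result
}

export default DataView;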
|
unknown
| |
d1549
|
train
|
Codesys v2.3 only has OPC-DA (availability depends on the PLC model and manufacturer). OPC-UA may not always be available on a PLC with Codesys v3.5 either, so check your vendor's documentation to see whether it is available as an option.
There are gateways (hardware and software) that allow "conversion" between OPC-DA and OPC-UA, or between other protocols and OPC-UA, for example if your PLC has Modbus-TCP/RTU. It is also possible to use another PLC, as Camile G. commented, but the cost and development effort may be higher; depending on the case it may be worth migrating the entire system to a single PLC with Codesys v3.5 and OPC-UA.
|
unknown
| |
d1551
|
train
|
I suppose that exceljs hasn't implemented at least one feature used in input.xlsx.
I had a similar situation with images some time ago and fixed it with a PR: https://github.com/exceljs/exceljs/pull/702
Could I ask you to create an issue on GH?
If you want to find what exactly went wrong:
*
*unzip input.xlsx
*unzip output.xlsx
*check the diff between them.
You can also upload input.xlsx here; it should help to find the cause of the bug.
Checking other versions of exceljs may also be helpful.
|
unknown
| |
d1557
|
train
|
Users are stored in USER_ table:
select * from USER_;
Groups are stored in GROUP_ table:
select * from GROUP_;
Roles are stored in ROLE_ table:
select * from ROLE_;
Simple view of users and their groups:
select USER_.USERID, USER_.SCREENNAME, USER_.EMAILADDRESS, GROUP_.NAME
from USER_, USERS_GROUPS, GROUP_
where USER_.USERID = USERS_GROUPS.USERID and USERS_GROUPS.GROUPID = GROUP_.GROUPID
order by USER_.SCREENNAME;
Simple view of users and their roles:
select USER_.USERID, USER_.SCREENNAME, USER_.EMAILADDRESS, ROLE_.NAME
from USER_, USERS_ROLES, ROLE_
where USER_.USERID = USERS_ROLES.USERID and USERS_ROLES.ROLEID = ROLE_.ROLEID
order by USER_.SCREENNAME;
Custom fields can be added for uses, groups and roles alike.
|
unknown
| |
d1559
|
train
|
Why can't you validate using:
$validemail = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
if ($validemail) {
$headers .= "Reply-to: $validemail\r\n";
}else
{
//redirect.
}
Considering your actual question:
<?php
$emails = explode(';',$emailList);
if(count ($emails)<3)
{
if(filter_var_array($emails, FILTER_VALIDATE_EMAIL))
{
mail();
}
else
{
//die
}
}
?>
A: You could check the number of email in the string like this:
if(count(explode(';',$emailList))<3)
{
// send email
}
else
{
// Oh no, jumbo!
}
This code will explode your email string into an array based on the ; characters, then count the array elements and execute one of two scenarios based on that number.
A: This should work to only pick maximum two emails (the first two):
$emailString = "email@examplecom;[email protected];[email protected]";
$emails = explode(";", $emailString);
$emails = array_slice($emails, 0, 2);
$emailString = implode(";", $emails);
var_dump($emailString);
Outputs:
string(31) "email@examplecom;[email protected]"
|
unknown
| |
d1561
|
train
|
Approach 1:
Four different servers mean four different URLs.
Suppose your application URL is
http(s)://ipaddress:port/application
Now it is possible that all four server instances are on the same machine; in that case the "ipaddress" would be the same but the port would be different. If the machines are different, both the ipaddress and the port would be different.
Inside your code you can get the absolute path. The absolute path consists of the complete URL the file was requested on, like
http://ipaddress:port/application/abc/def/x.java.
From this you can extract the port/ipaddress and write your logic (see the sketch below).
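A minimal sketch of approach 1, assuming a servlet environment (the port-to-instance mapping is made up for illustration):
import javax.servlet.http.HttpServletRequest;

public class ServerIdentifier {
    /** Returns a label for the server instance that handled this request. */
    public static String identify(HttpServletRequest request) {
        String host = request.getServerName();   // e.g. "192.168.0.10"
        int port = request.getServerPort();      // e.g. 8081
        switch (port) {                           // hypothetical port assignments
            case 8081: return "server-1";
            case 8082: return "server-2";
            case 8083: return "server-3";
            case 8084: return "server-4";
            default:   return host + ":" + port;
        }
    }
}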
Approach 2:
Have a property file inside your application that contains the server it is being deployed on. (Right now I can't think of a way to set this automatically, but during deployment you can set the server name in this property file.)
Later on, when you need it, you can read the property file and you will know which version of the application you are running on.
Personally I'd prefer approach 2 (and it should be preferred if we can find a way to initialize the properties file automatically).
Hope it helps :-)
|
unknown
| |
d1563
|
train
|
Instead of (or in addition to) giving your labels specific names you can later match on, I think a Map of JLabels with Strings or Integers as keys might be a better approach:
Map<String,JLabel> labelMap = new HashMap<String,JLabel>();
labelMap.put("1", OP_1);
labelMap.put("2", OP_2);
This will allow later access such as "the label for key 2" as well as "list all the labels and find the one with text 2" (see the sketch below).
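A small sketch of both kinds of lookup with such a map:
import java.util.HashMap;
import java.util.Map;
import javax.swing.JLabel;

public class LabelLookup {
    public static void main(String[] args) {
        Map<String, JLabel> labelMap = new HashMap<>();
        labelMap.put("1", new JLabel("OP_1"));
        labelMap.put("2", new JLabel("OP_2"));

        // direct access by key
        labelMap.get("2").setText("2");

        // or scan all labels and find the one with a given text
        for (JLabel label : labelMap.values()) {
            if ("2".equals(label.getText())) {
                System.out.println("found the label showing 2");
            }
        }
    }
}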
A: Here I created an array of JLabels and a method, updateNextLabel(String), which will update the next JLabel with whatever you enter for str.
public class Example {
static int count = 0; //Create a count
static JLabel[] array = new JLabel[3]; //Create an array to hold all three JLabels
public static void main(String[] args) {
//Set the default text for each JLabel
array[0] = new JLabel("This is OP1");
array[1] = new JLabel("This is OP2");
array[2] = new JLabel("This is OP3");
//Here is an example if you wanted to use a for-loop to update the JLabels
for (int x = 0; x < array.length; x++) {
updateNextLabel("This is the new text for OP" + (count + 1));
System.out.println(array[x].getText());
}
}
public static void updateNextLabel(String str) {
array[count].setText(str);
count++;
}
}
|
unknown
| |
d1573
|
train
|
After sifting through the CSS for a few minutes I found a solution; I have made a list of corrections.
Here is the CSS code on pastebin, it does all the fixes I have mentioned below.
Manual Fix
Find #middlewrapper #content .contentbox, disable float:left.
#middlewrapper #content .contentbox {
width: 650px;
padding: 15px;
/* float: left; */
background-color: white;
}
Find #middlewrapper #content .contentbox_shadow, disable float:left.
#middlewrapper #content .contentbox_shadow {
width: 690px;
height: 20px;
/* float: left; */
background-image: url('http://xn--nstvedhandel-6cb.dk/naestved/public/css/../images/content_skygge.png');
background-repeat: no-repeat;
}
Find #middlewrapper #content, disable float:left.
#middlewrapper #content {
width: 690px;
/* float: left; */
}
Finally, find #middlewrapper #content .contentbox and, after its definition, place #content.
#content {
width: 680px;
float: left;
}
Do not delete or alter the definition of #middlewrapper #content found a few lines later.
|
unknown
| |
d1577
|
train
|
onRow seems to expect a "factory" function which returns the actual event handlers.
(defn on-row-factory [record row-index]
#js {:onClick (fn [event] ...)
:onDoubleClick (fn [event] ...)})
;; reagent
[:> Table {:onRow on-row-factory} ...]
You don't need to use the defn and could just inline a fn instead.
|
unknown
| |
d1581
|
train
|
You must first extract the number from the string. If the text part ("R") is always separated from the number part by a "|", you can easily separated the two with Split:
Dim Alltext_line = "R|1"
Dim parts = Alltext_line.Split("|"c)
parts is a string array. If this results in two parts, the string has the expected shape and we can try to convert the second part to a number, increase it and then re-create the string using the increased number
Dim n As Integer
If parts.Length = 2 AndAlso Integer.TryParse(parts(1), n) Then
Alltext_line = parts(0) & "|" & (n + 1)
End If
Note that the c in "|"c denotes a Char constant in VB.
A: An alternate solution that takes advantage of the String type defined as an Array of Chars.
I'm using string.Concat() to patch together the resulting IEnumerable(Of Char) and CInt() to convert the string to an Integer and sum 1 to its value.
Raw_data = "R|151"
Dim Result As String = Raw_data.Substring(0, 2) & (CInt(String.Concat(Raw_data.Skip(2))) + 1).ToString
This, of course, supposes that the source string is directly convertible to an Integer type.
If a value check is instead required, you can use Integer.TryParse() to perform the validation:
Dim ValuePart As String = Raw_data.Substring(2)
Dim Value As Integer = 0
If Integer.TryParse(ValuePart, Value) Then
Raw_data = Raw_data.Substring(0, 2) & (Value + 1).ToString
End If
If the left part can be variable (in size or content), the answer provided by Olivier Jacot-Descombes is covering this scenario already.
A: Sub IncrVal()
Dim s = "R|1"
For x% = 1 To 10
s = Regex.Replace(s, "[0-9]+", Function(m) Integer.Parse(m.Value) + 1)
Next
End Sub
|
unknown
| |
d1597
|
train
|
Just update your vue-loader. In recent weeks it has been updating fast (from v16 back to v15).
|
unknown
| |
d1599
|
train
|
On second thought, let me expand on my comment:
You can't set Dockerfile environment variables to the result of commands in a RUN statement. Variables set in a RUN statement are ephemeral; they exist only while the RUN statement is active
If you don't have access to the host environment (to pass arguments to the docker build command), you're not going to be able to do exactly what you want.
However, you can add an ENTRYPOINT script to your container that will set up dynamic environment variables before the main process runs. That is, if you have in your Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
And in /docker-entrypoint.sh you have:
#!/bin/bash
branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p' | awk -Frelease/ '/release/{print $2}')
if [[ "$branch" != qa* ]]; then branch=$(git log -1 --pretty | grep release/ | awk -Frelease/ '/release/{print $2}' | awk -F: '{print $1}'); fi
export EXPORT_ENV="$branch"
exec "$@"
Then the EXPORT_ENV environment variable would be available in the environment of your CMD process.
|
unknown
| |
d1603
|
train
|
If you haven't figured out the issue yet, it's likely that you don't have write permissions to the directory the image is in.
|
unknown
| |
d1607
|
train
|
You should identify when the text is the final answer and reset the text before adding a new one.
from kivy.app import App
from kivy.core.window import Window
from kivy.uix.widget import Widget

Window.size = (350, 450)

class MainWidget(Widget):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.textIsResult = False

    def clear(self):
        self.ids.input.text = ""

    def back(self):
        expression = self.ids.input.text
        self.ids.input.text = expression[:-1]

    def pressed(self, button):
        expression = self.ids.input.text
        # start a fresh expression after a result or an error was shown
        if self.textIsResult or "Fault" in expression:
            expression = ""
            self.textIsResult = False
        if expression == "0":
            self.ids.input.text = f"{button}"
        else:
            self.ids.input.text = f"{expression}{button}"

    def answer(self):
        expression = self.ids.input.text
        try:
            self.ids.input.text = str(eval(expression))
            self.textIsResult = True
        except Exception:
            self.ids.input.text = "Fault"

class TheLabApp(App):
    pass

TheLabApp().run()
|
unknown
| |
d1609
|
train
|
You should rethrow the error (or throw a new error) to catch the error again.
Here's an example:
Promise.reject("throwed on demo 1")
.catch((e) => {
console.log("Catched", e)
})
.catch((e) => {
// unreached block
console.log("Can NOT recatch", e)
})
Promise.reject("throwed on demo 2")
.catch((e) => {
console.log("Catched", e)
throw e
})
.catch((e) => {
console.log("Recatched", e)
})
UPDATE: Another issue: you need to send a response on both error and success. Otherwise the request will never get a response and the client will be left pending forever.
_server.get(`/select`, (req, res) => {
return DefaultSQL.select(table).then((result) => {
res.send(result)
//return result
}).catch((err) => {
res.status(500).send(err)
console.log(err)
});
})
|
unknown
| |
d1611
|
train
|
Just open the csv file in append mode. This will solve your problem.
Use:
with open("pav.csv",'a',newline='') as wr:
A: You need to open the file in append mode so that it will write to the end of the file:
with open("C:\pavan\pav.csv",'a',newline='') as wr:
This will open the file in append mode, writing new rows to the end of the file (and creating it if it does not exist).
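A minimal sketch of the full pattern (the file name and row contents are just examples):
import csv

row = ["pavan", 42, "example"]

with open("pav.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(row)   # appended after any existing rows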
|
unknown
| |
d1613
|
train
|
Sanjeev got it, there is a parameter you can add to specify version:
FacebookClient fbClient = new FacebookClient();
fbClient.Version = "v2.2";
fbClient.Post("me/feed", new
{
message = string.Format("Hello version 2.2! - Try #2"),
access_token = "youraccesstokenhere"
});
If you are not specifying a version, it'll default to the oldest supported version:
https://developers.facebook.com/docs/apps/versions#unversioned_calls
Facebook will warn you that you are nearing end of support for that version.
Don't take any chances that the auto-upgrade to a newer version will work; it is better to develop, test and deliver with the latest version.
|
unknown
| |
d1615
|
train
|
You can use Import/Export option for this task.
*
*Right click on your table
*Select "Import/Export" option & Click
*Provide proper option
*Click Ok button
A: You should try this; it should work:
COPY kordinater.test(id,date,time,latitude,longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ',' CSV HEADER;
Your CSV header must be separated by commas, NOT semicolons, or try to change the id column type to bigint.
to know more
A: I believe the quickest way to overcome this issue is to create an intermediary temporary table, so that you can import your data and cast the coordinates as you please.
Create a similar temporary table with the problematic columns as text:
CREATE TEMPORARY TABLE tmp
(
id integer,
date date,
time time without time zone,
latitude text,
longitude text
);
And import your file using COPY:
COPY tmp FROM '/path/to/file.csv' DELIMITER ';' CSV HEADER;
Once you have your data in the tmp table, you can cast the coordinates and insert them into the test table with this command:
INSERT INTO test (id, date, time, latitude, longitude)
SELECT id, date, time, replace(latitude,',','.')::numeric, replace(longitude,',','.')::numeric from tmp;
One more thing:
Since you're working with geographic coordinates, I sincerely recommend you to take a look at PostGIS. It is quite easy to install and makes your life much easier when you start your first calculations with geospatial data.
|
unknown
| |
d1617
|
train
|
Unfortunatelly, FTP won't work.
DownloadManager supports HTTP. And HTTPS is supported since ICS.
If you try downloading from FTP you receive one of these exceptions:
java.lang.IllegalArgumentException: Can only download HTTP URIs
or
java.lang.IllegalArgumentException: Can only download HTTP/HTTPS URIs
|
unknown
| |
d1619
|
train
|
Managed to solve this.
I had to use a templating language (Jinja2) instead of AJAX to get my form schema into my HTML document, so that JSON Form (a jQuery form builder) could execute on a full HTML doc when the page loads.
Silly!
Hope this helps.
|
unknown
| |
d1621
|
train
|
It depends on the app you are building.
Database connection pools are used because of following reasons:
*
*Acquiring DB connection is costly operation.
*You have limited resources and hence at a time can have only finite number of DB connections open.
*Not all the user requests being processed by your server are doing DB operations, so you can reuse DB connections between requests.
Since acquiring new connections is costly, you should keep min_size non-zero. Based on the load during light usage of your app, you can make a good guess here. Typically, 5 is specified in most examples.
acquire_increment depends on how fast the number of users using your app increases. If you ask for 1 new connection every time you need an extra one, your app may perform badly; so, anticipating user bursts, you may want to increment in larger chunks, say 5 or 10 or more.
Typically, the maximum number of DB connections can be lower than the number of concurrent users of your app. However, if your application is database-heavy, you may have to configure max_size to match the number of concurrent users you have.
There will be a point where you cannot deal with so many users even after configuring max_size to be very high. That's when you will have to think about re-designing your app to reduce the load on the database. This is typically done by offloading read operations to an alternate DB instance that serves only reads, and by caching data that does not change often but is read very often.
Similar reasoning can be applied to the other settings as well.
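The exact API depends on which pooling library you use; as an illustration only, here is roughly how those knobs look with c3p0 in Java (driver, URL and credentials are placeholders):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSetup {
    public static ComboPooledDataSource createPool() throws Exception {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setDriverClass("org.postgresql.Driver");         // placeholder driver
        pool.setJdbcUrl("jdbc:postgresql://localhost/mydb");  // placeholder URL
        pool.setUser("user");
        pool.setPassword("secret");

        pool.setMinPoolSize(5);        // keep a few connections warm
        pool.setAcquireIncrement(5);   // grow in chunks, not one at a time
        pool.setMaxPoolSize(50);       // hard ceiling on DB connections
        return pool;
    }
}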
|
unknown
| |
d1623
|
train
|
We could use get to get the value of the object. If there are multiple objects, use mget. For example, here I am assigning 'debt_a' with the value of 'debt_30_06_2010'
assign('debt_a', get(paste0('debt_', date[1])))
debt_a
#[1] 1 2 3 4 5
mget returns a list. So if we are assigning 'debt_a' to multiple objects,
assign('debt_a', mget(paste0('debt_', date)))
debt_a
#$debt_30_06_2010
#[1] 1 2 3 4 5
#$debt_30_06_2011
#[1] 6 7 8 9 10
data
debt_30_06_2010 <- 1:5
debt_30_06_2011 <- 6:10
date <- c('30_06_2010', '30_06_2011')
A: I'm not sure if I understood your question correctly, but I suspect that your objects are names of functions, and that you want to construct these names as characters to use the functions. If this is the case, this example might help:
myfun <- function(x){sin(x)**2}
mychar <- paste0("my", "fun")
eval(call(mychar, x = pi / 4))
#[1] 0.5
#> identical(eval(call(mychar, x = pi / 4)), myfun(pi / 4))
#[1] TRUE
|
unknown
| |
d1631
|
train
|
Sometimes SWFRender gets stuck on very heavy files, especially when producing 300dpi+ images. In this case Gnash may help:
gnash -s<scale-image-factor> --screenshot last --screenshot-file output.png -1 -r1 input.swf
Here we dump the last frame of a movie to the file output.png, disabling sound processing and exiting after the frame is rendered. We can also specify the scale factor here, or use
-j width -k height
to specify the exact size of resulting image.
A: You could for example build an AIR app that loads each SWF, takes the screenshot and writes it to a file.
The thing is you'll need to kick off something to do the render and, as far as I know, you can't do that without the player or one of its open-source implementations.
I think your best bet is going AIR; the SDK is free and cross-platform. If you are used to Python, the necessary AS3 should be easy enough to pick up.
HTH,
J
A: I'm sorry to answer my own question, but I found an undocumented feature of swfrender (part of the swftools) by browsing through the sources.
swfrender path/to/my.swf -X<width of output> -Y<height of output>
-o<filename of output png>
As you might have guessed the X option lets you determine the width (in pixels) of the output and Y does the same for the height. If you just set one parameter, then the other one is chosen in relation to the original height-width-ratio (pretty useful)
That does the trick for me but as Zarate offered a solution that might be even better (I'm thinking of swf to PDF conversion) he deserves the credits.
Cheers
|
unknown
| |
d1635
|
train
|
I guess in foo you assign ptr some value (otherwise the *& would be pointless). You cannot pass nullptr, and you have to declare a pointer like you showed in the wrapper, because nullptr is an rvalue. An rvalue is an expression, an "unnamed object", and you cannot take its address. There is more information here: Why don't rvalues have an address?
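A minimal sketch of the situation (foo and the wrapper are stand-ins for the real code):
#include <iostream>

void foo(int*& ptr) {        // needs an lvalue: it may reassign the pointer
    ptr = new int(42);
}

void wrapper() {
    // foo(nullptr);         // error: cannot bind a non-const lvalue reference to an rvalue
    int* p = nullptr;        // a named pointer is an lvalue
    foo(p);                  // fine; p now points at the new int
    std::cout << *p << '\n';
    delete p;
}

int main() {
    wrapper();
}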
|
unknown
| |
d1639
|
train
|
In your onCellMouseOver you can get the row index (e.rowIndex). From that you can get the item from the grid, assuming you are using an ItemFileReadStore (I have not tried it with an ItemFileWriteStore):
function cellMouseOver (e)
{
var rowIndex = e.rowIndex;
var item = grid.getItem(e.rowIndex);
}
|
unknown
| |
d1641
|
train
|
I found:
*
*hibernate.cfg.xml is not needed
*only persistence.xml and tomee.xml are required
I give you my example:
<persistence version="1.0"
xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="docTracingPU" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:comp/env/jdbc/docTracing</jta-data-source>
<non-jta-data-source>java:comp/env/jdbc/docTracing</non-jta-data-source>
<class>com.emaborsa.doctracing.core.persistentobject.UtentePO</class>
<properties>
<property name="hibernate.hbm2ddl.auto" value="validate" />
<property name="hibernate.transaction.flush_before_completion" value="true"/>
<property name="hibernate.transaction.auto_close_session" value="true"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.apache.openejb.hibernate.TransactionManagerLookup" />
<property name="hibernate.transaction.flush_before_completion" value="true"/>
<property name="hibernate.transaction.auto_close_session" value="true"/>
<!-- Print SQL to stdout. -->
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.format_sql" value="true" />
</properties>
</persistence-unit>
</persistence>
<?xml version="1.0" encoding="UTF-8"?>
<tomee>
<Resource id="docTracingPU" type="DataSource">
JdbcDriver org.postgresql.Driver
JdbcUrl jdbc:postgresql://127.0.0.1:5432/myDb
UserName ****
Password ****
JtaManaged false
TestWhileIdle true
InitialSize 5
</Resource>
<Resource id="docTracingPU" type="DataSource">
JdbcDriver org.postgresql.Driver
JdbcUrl jdbc:postgresql://127.0.0.1:5432/myDb
UserName *****
Password *****
JtaManaged true
TestWhileIdle true
InitialSize 5
</Resource>
</tomee>
|
unknown
| |
d1645
|
train
|
No, there's no way to do it, and you should not think of it this way. The sender should perform the same no matter what number of slots are connected to a signal. That's the basic contract of the signal-slot mechanism: the sender is completely decoupled from, and unaware of, the receiver.
What you're trying to do is qualified dispatch: there are multiple receivers, and each receiver can process one or more message types. One way of implementing it is as follows:
*
*Emit (signal) a QEvent. This lets you maintain the signal-slot decoupling between the transmitter and the receiver(s).
*The event can then be consumed by a custom event dispatcher that knows which objects process events of given type.
*The objects are sent the event in the usual fashion, and receive it in their event() method.
The implementation below allows the receiver objects to live in other threads. That's why it needs to be able to clone events.
#include <QCoreApplication>
#include <QEvent>
#include <QThread>
#include <QDebug>
class ClonableEvent : public QEvent {
Q_DISABLE_COPY(ClonableEvent)
public:
ClonableEvent(int type) : QEvent(static_cast<QEvent::Type>(type)) {}
virtual ClonableEvent * clone() const { return new ClonableEvent(type()); }
};
Q_DECLARE_METATYPE(ClonableEvent*)
class Dispatcher : public QObject {
Q_OBJECT
QMap<int, QSet<QObject*>> m_handlers;
public:
Q_SLOT void dispatch(ClonableEvent * ev) {
auto it = m_handlers.find(ev->type());
if (it == m_handlers.end()) return;
for (auto obj : *it) {
if (obj->thread() == QThread::currentThread())
QCoreApplication::sendEvent(obj, ev);
else
QCoreApplication::postEvent(obj, ev->clone());
}
}
void addMapping(ClonableEvent * ev, QObject * obj) {
addMapping(ev->type(), obj);
}
void addMapping(int type, QObject * obj) {
QSet<QObject*> & handlers = m_handlers[type];
auto it = handlers.find(obj);
if (it != handlers.end()) return;
handlers.insert(obj);
QObject::connect(obj, &QObject::destroyed, [this, type, obj]{
removeMapping(type, obj);
});
m_handlers[type].insert(obj);
}
void removeMapping(int type, QObject * obj) {
auto it = m_handlers.find(type);
if (it == m_handlers.end()) return;
it->remove(obj);
}
};
class EventDisplay : public QObject {
bool event(QEvent * ev) {
qDebug() << objectName() << "got event" << ev->type();
return QObject::event(ev);
}
public:
EventDisplay() {}
};
class EventSource : public QObject {
Q_OBJECT
public:
Q_SIGNAL void indication(ClonableEvent *);
};
#define NAMED(x) x; x.setObjectName(#x)
int main(int argc, char ** argv) {
QCoreApplication app(argc, argv);
ClonableEvent ev1(QEvent::User + 1);
ClonableEvent ev2(QEvent::User + 2);
EventDisplay NAMED(dp1);
EventDisplay NAMED(dp12);
EventDisplay NAMED(dp2);
Dispatcher d;
d.addMapping(&ev1, &dp1); // dp1 handles only ev1
d.addMapping(&ev1, &dp12); // dp12 handles both ev1 and ev2
d.addMapping(&ev2, &dp12);
d.addMapping(&ev2, &dp2); // dp2 handles only ev2
EventSource s;
QObject::connect(&s, &EventSource::indication, &d, &Dispatcher::dispatch);
emit s.indication(&ev1);
emit s.indication(&ev2);
return 0;
}
#include "main.moc"
A: If the connection is in one thread, I think you can throw an exception. But in this case you should catch any exception while emitting the signal:
try {
emit someSignal();
} catch(...) {
qDebug() << "catched";
}
But I think that it's a bad idea. I would use event dispatching for this.
|
unknown
| |
d1647
|
train
|
This is in conjunction with Seth McClaine's answer.
Echo your values using:
<?php echo $Name; ?> <?php echo $Bech; ?>
Instead of <?=$Name?> <?=$Bech?>
The use of short tags is not recommended for something like this.
Reformatted code:
<?php
$abfrage = "SELECT * FROM tester";
$ergebnis = mysql_query($abfrage);
$row = mysql_fetch_object($ergebnis);
$Name=$row->Name;
$Bech=$row->Beschreibung;
?>
<a href="" title="<?php echo $Name; ?> <?php echo $Bech; ?>">Test Link</a>
A: If you just want the first result remove the while...
<?php
$abfrage = "SELECT * FROM tester";
$ergebnis = mysql_query($abfrage);
$row = mysql_fetch_object($ergebnis);
$Name=$row->Name;
$Bech=$row->Beschreibung;
?>
<a href="" title="<?=$Name?> <?=$Bech?>">Test Link</a>
|
unknown
| |
d1649
|
train
|
I tried the code you posted, except that I added: require('Zend/Soap/AutoDiscover.php');. It worked.
A: Try adding a docblock to the hello function. The WSDL generator relies on it to generate a proper WSDL file. http://framework.zend.com/manual/en/zend.soap.autodiscovery.html See the important notes in that link.
A: Yep, you are missing require('Zend/Soap/AutoDiscover.php'); that's all.
|
unknown
| |
d1651
|
train
|
I was able to get correct character spacing by adding a space between each character and making that space have a size of 8pt (because my text has a size of 16).
It looks good in my report.
|
unknown
| |
d1657
|
train
|
You can use loops to create an array of rows and an array of columns beforehand and assign these to the RowDefinitions and ColumnDefinitions properties.
I should have thought you'd need to call RowDefinitions.Add() and ColumnDefinitions.Add() in a loop to do so, though.
A: No, this is not possible because the only way this would work is if you could assign a completely new value to the RowDefinitions property, which you can't:
public RowDefinitionCollection RowDefinitions { get; }
^^^^
The syntax as shown in your question is just a handy way of calling .Add on the object in that property, so there is no way for you to do this inline with that syntax. Your code is just "short" for this:
var temp = new Grid();
temp.RowSpacing = 12;
temp.ColumnSpacing = 12;
temp.VerticalOptions = LayoutOptions.FillAndExpand;
temp.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
temp.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
temp.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
... same for columns
Specifically, your code is not doing this:
temp.RowDefinitions = ...
^
You would probably want code like this:
var grid = new Grid()
{
RowSpacing = 12,
ColumnSpacing = 12,
VerticalOptions = LayoutOptions.FillAndExpand,
RowDefinitions = Enumerable.Range(0, 100).Select(_ =>
new RowDefinition { Height = new GridLength(1, GridUnitType.Star) }),
ColumnDefinitions = Enumerable.Range(.....
But you cannot do this as this would require that RowDefinitions and ColumnDefinitions was writable.
The closest thing is like this:
var temp = new Grid
{
RowSpacing = 12,
ColumnSpacing = 12,
VerticalOptions = LayoutOptions.FillAndExpand,
};
for (int index = 0; index < rowCount; index++)
temp.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
... same for columns
var grid = temp;
A: RowDefinitions is a RowDefinitionCollection. RowDefinitionCollection is internal, which means you cannot create one outside Grid.
|
unknown
| |
d1659
|
train
|
You probably already know about data types in JS.
If you pass circle or star as an argument without quotes, the argument will be interpreted as a variable name rather than a string (which is not your intention).
As per your function definition it is expecting a string, which means you should pass a string literal, e.g. symbols('star'), or have a variable containing a string value, e.g. var circle = 'circle';
symbols(circle);
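A tiny sketch of the difference (the body of symbols is made up):
function symbols(shape) {      // expects a string
    console.log('drawing ' + shape);
}

symbols('star');               // OK: string literal
var circle = 'circle';
symbols(circle);               // OK: variable holding a string
// symbols(star);              // ReferenceError: star is not defined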
|
unknown
| |
d1663
|
train
|
There are some easy to follow examples in this GitHub mirror of django-autocomplete.
A: some time ago I put together a small tutorial on this, you might find that useful... it's here
|
unknown
| |
d1667
|
train
|
The following command was failing with failed to compute cache key: not found:
docker build -t tag-name:v1.5.1 - <Dockerfile
Upon changing the command to the following it got fixed:
docker build -t tag-name:v1.5.1 -f Dockerfile .
A: In my case I found that docker build is case-sensitive with directory names: I was writing /bin/release/net5.0/publish in the COPY instruction and it failed with the same error. I just changed it to /bin/Release/net5.0/publish and it worked.
A: Error: failed to compute cache key: "src" not found: not found
In my case, the folder/file was excluded in .dockerignore.
After removing it from .dockerignore, I was able to create the image.
A: In my case, I had something like this:
FROM mcr.microsoft.com/dotnet/aspnet:5.0
COPY bin/Release/net5.0/publish/ app/
WORKDIR /app
ENTRYPOINT ["dotnet", "MyApi.dll"]
And I finally realized that I had the bin folder in my .dockerignore file.
A: Check your .dockerignore file. It is possible that it ignores files needed by the COPY command, and then you get the failed to compute cache key error.
.dockerignore may be configured to minimize the files sent to docker for performance and security:
*
!dist/
The first line * disallows all files. The second line !dist/ allows the dist folder
This can cause unexpected behavior:
FROM nginx:latest
# Fails because of * in .dockerignore
# failed to compute cache key: "/nginx.conf.spa" not found: not found
# Fix by adding `!nginx.conf.spa` to .dockerignore
COPY nginx.conf.spa /etc/nginx/nginx.conf
RUN mkdir /app
# Works because of !dist/ in .dockerignore
COPY dist/spa /app
Belts and suspenders.
A: I had the same issue: I had set the Docker environment to Windows when adding Docker support. Even running in Visual Studio threw that error. I changed the environment to Linux, as my Docker is running in the Windows Subsystem for Linux (WSL).
Then I moved back to the terminal to run the commands.
I was able to resolve this by moving to the solution folder (root folder).
And I did docker build like this:
docker build -t containername/tag -f ProjectFolder/Dockerfile .
Then I did docker run:
docker run containername/tag
A: The way Visual Studio does it is a little bit odd.
Instead of launching docker build in the folder with the Dockerfile, it launches in the parent folder and specifies the Dockerfile with the -f option.
I was using the demo project (trying to create a minimal solution for another question) and struck the same situation.
Setup for my demo project is
\WorkerService2 ("solution" folder)
+- WorkerService2.sln
+- WorkserService2 ("project" folder)
+- DockerFile
+- WorkerService2.csproj
+- ... other program files
So I would expect to go
cd \Workerservice2\WorkerService2
docker build .
But I get your error message.
=> ERROR [build 3/7] COPY [WorkerService2/WorkerService2.csproj, WorkerService2/] 0.0s
------
> [build 3/7] COPY [WorkerService2/WorkerService2.csproj, WorkerService2/]:
------
failed to compute cache key: "/WorkerService2/WorkerService2.csproj" not found: not found
Instead, go to the parent directory, with the .sln file and use the docker -f option to specify the Dockerfile to use in the subfolder:
cd \Workerservice2
docker build -f WorkerService2\Dockerfile --force-rm -t worker2/try7 .
docker run -it worker2/try7
Edit (Thanks Mike Loux, tblev & Goku):
Note the final dot on the docker build command.
For docker the final part of the command is the location of the files that Docker will work with. Usually this is the folder with the Dockerfile in, but that's what's different about how VS does it. In this case the dockerfile is specified with the -f. Any paths (such as with the COPY instruction in the dockerfile) are relative to the location specified. The . means "current directory", which in my example is \WorkerService2.
I got to this stage by inspecting the output of the build process, with verbosity set to Detailed.
If you choose Tools / Options / Projects and Solutions / Build and Run you can adjust the build output verbosity, I made mine Detailed.
Edit #2 I think I've worked out why Visual Studio does it this way.
It allows the project references in the same solution to be copied in.
If it was set up to do docker build from the project folder, docker would not be able to COPY any of the other projects in the solution in. But the way this is set up, with current directory being the solution folder, you can copy referenced projects (subfolders) into your docker build process.
A: Asking for a directory that does not exist throws this error.
In my case, I tried
> [stage-1 7/14] COPY /.ssh/id_rsa.pub /.ssh/:
------
failed to compute cache key: "/.ssh/id_rsa.pub" not found: not found
I had forgotten to add the /.ssh folder to the project directory. In your case you should check whether /client is really a subfolder of your Dockerfile build context.
A: I had the same issue. In my case there was a wrong directory specified.
My Dockerfile was:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS publish
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o publish/web src/MyApp/MyApp.csproj
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=publish publish/web .
EXPOSE 80
CMD ASPNETCORE_URLS=http://*:$PORT dotnet MyApp.dll
Then I realised that in the second build stage I am trying to copy project files from directory publish/web:
COPY --from=publish publish/web .
But as I specified workdir /app in the first stage, my files are located in that directory in image filesystem, so changing path from publish/web to app/publish/web resolved my issue.
So my final working Dockerfile is:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS publish
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o publish/web src/MyApp/MyApp.csproj
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=publish app/publish/web .
EXPOSE 80
CMD ASPNETCORE_URLS=http://*:$PORT dotnet MyApp.dll
A: In my case there was a sneaky trailing whitespace in the file name.
------
> [3/3] COPY init.sh ./:
------
failed to compute cache key: "/init.sh" not found: not found
So the file was actually called "init.sh " instead of "init.sh".
A: I had a similar issue: apparently, Docker roots the file system during build to the specified build directory for security reasons. As a result, COPY and ADD cannot refer to arbitrary locations on the host file system. Additionally, there are other issues with syntax peculiarities. What eventually worked was the following:
COPY ./script_file.sh /
RUN /script_file.sh
A: I had faced the same issue.
The reason was that the name of the DLL file in the Dockerfile is case-sensitive.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY MyFirstMicroService.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "**MyFirstMicroService.dll**"]
This .dll name should match your .csproj file.
A: This also happens when you don't provide the proper path to your COPY command input. The most important clue I had is that the WORKDIR command sets a folder inside the container, not on the host file system (so it doesn't affect the path you need to specify for the COPY command's source).
A: In my case, I was making a mistake with '/' versus '\'. Let me explain.
Open your Dockerfile (note that the name is case-sensitive; the default file name is Dockerfile).
You may have something like this:
FROM mcr.microsoft.com/dotnet/runtime:5.0
COPY bin\Release\net5.0\publish .
ENTRYPOINT ["dotnet", "HelloDocker.dll"]
Replace COPY bin\Release\net5.0\publish . with COPY bin/Release/net5.0/publish .
A: in my case, it was a wrong Build with PATH configuration e.g. Docker build context
*
*Simple docker script
docker build .
where . is path to build context
*Gradle+Docker
docker {
dependsOn build
dependsOn dockerFilesCopy
name "${project.name}:${project.version}"
files "build" // path to build context
}
*Gradle+GitHub action
name: Docker build and push
on:
push:
branches: [ main ]
# ...
jobs:
build:
runs-on: ubuntu-latest
# ...
steps:
- name: Checkout
uses: actions/checkout@v2
# ...
- name: Build and export to Docker
uses: docker/build-push-action@v2
with:
# ...
file: src/main/docker/Dockerfile
context: ./build # path to build context
A: In my case, with an Angular project, my project was in a folder called e.g. My-Folder-Project and in the Dockerfile I was putting COPY --from=publish app/dist/My-Folder-Project .
But of course the correct thing is to use the "name" from your package.json, like COPY --from=publish app/dist/name-in-package.json .
A: In my case I changed context, and path of Dockerfile within docker-compose.yml config:
services:
server:
# inheritance structure
extends:
file: ../../docker-compose.server.yml
# I recommend you to play with this paths
build:
context: ../../
dockerfile: ./apps/${APP_NAME}/Dockerfile
...
|
unknown
| |
d1669
|
train
|
1) The Kalman filter should not require massive, non-linearly scaling amounts of memory: it only calculates the estimate from two things, the previous estimate and the current measurement (see the sketch after this list). Thus, you should expect the amount of memory you need to be at most proportional to the total number of data points. See: http://rsbweb.nih.gov/ij/plugins/kalman.html
2) Switching over to floats will halve the memory required for your calculation. That will probably be insignificant in your case: I assume that if the calculation is crashing due to memory, you are either running your JVM with a very small amount of memory or you have a massive data set.
3) If you really have a large data set ( > 1G ) and halving it is important, the library you mentioned can be refactored to only use floats.
4) For a comparison of Java matrix libraries, you can check out http://code.google.com/p/java-matrix-benchmark/wiki/MemoryResults_2012_02 --- the lowest memory footprint libs are ojAlgo, EJML, and Colt.
I've had excellent luck with Colt for large-scale calculations, but I'm not sure which ones implement the Kalman method.
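To illustrate point 1: a one-dimensional Kalman update only needs the previous estimate and its variance, so memory per sample is constant (a rough sketch, not tied to any particular library):
public class Kalman1D {
    private double estimate;        // current state estimate
    private double variance;        // uncertainty of the estimate
    private final double processNoise;
    private final double measurementNoise;

    public Kalman1D(double initialEstimate, double initialVariance,
                    double processNoise, double measurementNoise) {
        this.estimate = initialEstimate;
        this.variance = initialVariance;
        this.processNoise = processNoise;
        this.measurementNoise = measurementNoise;
    }

    /** Folds one measurement into the estimate using O(1) memory. */
    public double update(double measurement) {
        variance += processNoise;                                // predict
        double gain = variance / (variance + measurementNoise);  // Kalman gain
        estimate += gain * (measurement - estimate);             // correct
        variance *= (1 - gain);
        return estimate;
    }
}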
|
unknown
| |
d1671
|
train
|
sqlite3_bind_text() wants a pointer to the entire string, not only the first character. (You need to understand how C pointers and strings (character arrays) work.)
And the sqlite3_bind_text() documentation tells you to use five parameters:
sqlite3_bind_text(res, 1, updatedName.c_str(), -1, SQLITE_TRANSIENT);
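For context, a minimal sketch of how that call usually sits between prepare and step (the table and column names are made up):
#include <sqlite3.h>
#include <string>

bool updateName(sqlite3 *db, const std::string &updatedName, int id) {
    sqlite3_stmt *res = nullptr;
    const char *sql = "UPDATE people SET name = ?1 WHERE id = ?2;";

    if (sqlite3_prepare_v2(db, sql, -1, &res, nullptr) != SQLITE_OK)
        return false;

    // bind the whole string; SQLITE_TRANSIENT tells SQLite to make its own copy
    sqlite3_bind_text(res, 1, updatedName.c_str(), -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(res, 2, id);

    bool ok = (sqlite3_step(res) == SQLITE_DONE);
    sqlite3_finalize(res);
    return ok;
}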
|
unknown
| |
d1673
|
train
|
You may use ClippingMediaSource:
ClippingMediaSource(MediaSource mediaSource, long startPositionUs, long endPositionUs)
Creates a new clipping source that wraps the specified source and provides samples between the specified start and end position.
You can convert to have a new media source and set this new media source for your ExoPlayer:
// Create a new media source with your specified period
val newMediaSource = ClippingMediaSource(mediaSource, 0, 5_000_000)
|
unknown
| |
d1675
|
train
|
The web.config sample in the question is using StateServer mode, so the out-of-process ASP.NET State Service is storing state information. You will need to configure the State Service; see an example of how to do that in the "STATESERVER MODE(OUTPROC MODE)" section here:
https://www.c-sharpcorner.com/UploadFile/484ad3/session-state-in-Asp-Net/
Also be sure to read the disadvantages section of the above linked article to make sure this approach is acceptable for your needs.
Another way to manage user session is using the InProc mode to manage sessions via a worker process. You can then get and set HttpSessionState properties as shown here:
https://www.c-sharpcorner.com/UploadFile/3d39b4/inproc-session-state-mode-in-Asp-Net/
and also here:
https://learn.microsoft.com/en-us/dotnet/api/system.web.sessionstate.httpsessionstate?view=netframework-4.8#examples
Again be sure to note the pros and cons of InProc mode in the above linked article to determine what approach best fits your needs.
|
unknown
|