Fields per record: QuestionId, AnswerId, QuestionBody, QuestionTitle, AnswerBody
76378346
76378431
Regular expression for text box validation. I need to validate a text box. The conditions are as follows: it can contain numbers and letters, but it must reject these characters: !@#$%^&*+=. Currently I am using the regular expression below, but it is not working.

<input type="text" pattern="^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$" />

Can you help me find the correct regular expression?
How to modify a regular expression to validate a text box with letters and numbers only, excluding certain characters like !@#$%^&*+=
The regular expression you are currently using validates email addresses. To validate a text box so that it contains only letters and numbers, and rejects the characters !@#$%^&*+=, you can use:

^[a-zA-Z0-9]+$

This matches any string made up entirely of letters and numbers, and will not match any string containing characters such as !@#$%^&*+=.

<input type="text" pattern="^[a-zA-Z0-9]+$" />

Hope this helps. Thank you! Refer to this link if you need more clarification: https://laasyasettyblog.hashnode.dev/validating-username-using-regex

Improving my answer based on the follow-up question you asked:

<!DOCTYPE html>
<html>
<head>
    <title>Test Pattern</title>
</head>
<body>
    <input type="text" id="input" pattern='^[a-zA-Z0-9./|()?~<>":;]+$' />
    <button onclick="validate()">Validate</button>
    <script>
        function validate() {
            var input = document.getElementById("input").value;
            // Anchored with ^ and $ so the whole value must match,
            // not just a substring of it.
            var pattern = /^[a-zA-Z0-9./|()?~<>":;]+$/;
            if (pattern.test(input)) {
                alert("Valid");
            } else {
                alert("Invalid");
            }
        }
    </script>
</body>
</html>
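If the requirement is only to block the listed special characters (rather than to allow strictly letters and digits), a negated character class is another option. A minimal sketch, assuming the whole value must match:

<!-- Accepts any character except the forbidden set !@#$%^&*+= -->
<input type="text" pattern="^[^!@#$%^&*+=]+$" />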
76378408
76378435
I'm trying to create a blind auction. Basically it will ask for your name and then your bid. After that, it will ask if there are any other bidders; if yes, it will ask for their name and bid price. After you say yes, the terminal is cleared so the next bidder can't see how much the previous person bid. When I print data_base, it can't print more than two keys and values. Here is the output:

What is your name?: Gael
What is your bid: $560
Are there any other bidders? Type 'yes or 'no'.
yes
[({'Gael': ['560']},)]
What is your name?: Mikey
What is your bid: $350
Are there any other bidders? Type 'yes or 'no'.
yes
[({'Mikey': ['350']},)]
What is your name?: Josh
What is your bid: $298
Are there any other bidders? Type 'yes or 'no'.
no

Here is the final output:

[({'Mikey': ['350']},), ({'Josh': ['298']},)]

Gael's name and his bid are missing. Here is the code:

import os

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'yes':
        os.system('cls')
        data_base = []

    def new_user(name, bid):
        brandnew_user = {
            name: [bid]
        },
        data_base.append(brandnew_user)

    new_user(name, bid)
    print(data_base)

    if other_user == 'no':
        break

Thank you!! I was expecting that Gael's name and bid would be recorded, but it only recorded Mikey and Josh.
How can I add a new key and value to an existing list
Here's a better way to organize things: create data_base once, before the loop, so it isn't re-created (and emptied) every time a bidder answers 'yes'; that re-creation is why Gael's entry disappeared. Also, I'm not sure why you are creating a list of tuples of dictionaries. Why not just make data_base a dictionary and store the new entries as keys?

import os

data_base = []

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    data_base.append({name: [bid]})
    print(data_base)
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'no':
        break

Here's what I'm talking about:

import os

data_base = {}

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    data_base[name] = [bid]
    print(data_base)
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'no':
        break
76378340
76378439
I'm getting an error in Android Studio on second: "cannot resolve symbol second". How do I fix it so that the video loops from 358 back to 331 in this example?

package com.example.myapp;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.RelativeLayout;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.YouTubePlayer;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.listeners.AbstractYouTubePlayerListener;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.views.YouTubePlayerView;

public class FingerStretching extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_finger_stretching);

        YouTubePlayerView youTubePlayerView = findViewById(R.id.youtube_player_view);
        getLifecycle().addObserver(youTubePlayerView);

        youTubePlayerView.addYouTubePlayerListener(new AbstractYouTubePlayerListener() {
            String videoId = "mSZWSQSSEjE";

            @Override
            public void onReady(@NonNull YouTubePlayer youTubePlayer) {
                youTubePlayer.loadVideo(videoId, 331);
            }

            public void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer) {
                if (second == 358)
                    youTubePlayer.seekTo(331);
            }
        });
    }
}

I tried creating a local variable second.
How to repeat video with start and end time in Android Studio?
According to the source code, the signature of onCurrentSecond is

override fun onCurrentSecond(youTubePlayer: YouTubePlayer, second: Float)

You are not overriding it. It should be

@Override
public void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer, float second) {
    if (second >= 358)
        youTubePlayer.seekTo(331);
}

This kind of error is easily avoidable if you make use of the IDE's auto-complete feature. Typing onC within the AbstractYouTubePlayerListener should give you an auto-complete option for onCurrentSecond; selecting it automatically writes the override for you with the correct signature.
76378344
76378496
How to use React functions in CodePen? I wrote a React function in CodePen to test React hooks, but it constantly reports the error: Uncaught ReferenceError: require is not defined. My code:

import { useState, useEffect, useRef } from 'react';

function Test() {
  const [count, setCount] = useState(0);
  const prevRef = useRef();

  useEffect(() => {
    // const ref = useRef();
    console.log('ref----', prevRef.current);
    prevRef.current = count;
  })

  return (
    <div>
      <div onClick={() => setCount(count + 1)}>+1</div>
      <div>{`count: ${count}`}</div>
      <div>{`precount: ${prevRef.current}`}</div>
    </div>
  )
}

ReactDOM.render(<Test />, document.getElementById("app"));
How to use React functions in CodePen?
You can add a package by adjusting the settings in your Pen. (The original answer included a screenshot of the Pen's JS settings panel here.) Doing so automatically generates the necessary import statements:

import React, { useState, useEffect, useRef } from 'https://esm.sh/[email protected]';
import ReactDOM from 'https://esm.sh/[email protected]';

To help you understand this process, I've created a sample on CodePen that you can refer to in order to implement it yourself. Here is the CodePen link to the sample code: https://codepen.io/camel2243/pen/ExdBRar
76378323
76378505
The code I currently have is below. In my views.py I can't figure out how to set up my search function. All other functions work.

models.py

class User(AbstractUser):
    """User can be Employee or Customer"""

class Business(models.Model):
    business = models.CharField(max_length=50)

class BusinessOwner(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)

class Customer(models.Model):
    """ Customer-specific information """
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)

class Employee(models.Model):
    """ Employee-specific information """
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True, blank=True)

forms.py

class UserForm(UserCreationForm):
    class Meta:
        model = User
        fields = (
            "username",
            "email",
            "password1",
            "password2",
            "first_name",
            "last_name",
        )

class BusinessOwnerForm(forms.ModelForm):
    ...  # no fields

class EmployeeForm(forms.ModelForm):
    ...  # no fields

class CustomerForm(forms.ModelForm):
    ...  # no fields

class BusinessForm(forms.ModelForm):
    class Meta:
        model = Business
        fields = ("business",)

views.py (user creation process)

def searchUsers(request):
    qs_owned_businesses = BusinessOwner.objects.filter(user=request.user).values('business_id')
    qs_biz_customers = Customer.objects.filter(business_id__in=qs_owned_businesses)
    if request.method == "GET":
        query = request.GET.get('search')
        if query == '':
            query = 'None'
        results = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)
        return render(request, 'search_users.html', {'query': query, 'results': results})

# example of how employees and customers are created in my views:
def employeeCreation(request):
    """Creates an Employee"""
    if request.method == "POST":
        employee_form = EmployeeForm(request.POST)
        user_creation_form = UserForm(request.POST)
        if (user_creation_form.is_valid() and employee_form.is_valid()):
            employee_form.instance.business = request.user.businessowner.business
            new_user = user_creation_form.save(commit=False)
            employee_form.instance.user = new_user
            user_creation_form.save()
            employee_form.save()
            messages.success(request, "You Have Created An Employee")
            return redirect("user-homepage")
        else:
            messages.error(request, "Try creating an Employee Again something went wrong.")
    employee_form = EmployeeForm()
    user_creation_form = UserForm()
    return render(request, "registration/employee_creation.html",
                  context={"user_creation_form": user_creation_form,
                           "employee_form": employee_form})

def customerCreation(request):
    ...  # function is exactly the same as employee creation, just for a customer

The Business owner's business is used as a starting point to build employees off of. I didn't include that view because it's not necessary for this, and Stack Overflow limits how much code I put here.

search_users.html

{% if results %}
  You searched for {{ query }}
  {% for x in results %}
    {{ x }}<p></p>
  {% endfor %}
{% endif %}

I have tried using Q, icontains, .filter(), and django-filter, but this is a tricky search criterion that I can't get to work.

Navbar search feature:

<form action="{% url 'search-users' %}" class="form-inline" method="get">
  <div class="form-group mx-sm-3 mb-2">
    <label for="" class="sr-only">search</label>
    <input name="search" type="" class="form-control" id="" placeholder="Keyword">
  </div>
  <button type="submit" class="btn btn-success btn-lg mb-2">Search</button>
</form>
Search Customers that are part of the logged-in User's Business?
Let's break this down into tasks. I'm using values() to limit the request to what we're interested in, as I can then use that result to filter further.

# First you want to get all the businesses the logged-in user owns.
# (Currently they can only own one, so you could use get rather than filter,
# but you might change that later and this approach will still work.)
qs_owned_businesses = BusinessOwner.objects.filter(user=request.user).values('business_id')

# Next you want to get all the customers of those businesses.
qs_biz_customers = Customer.objects.filter(business_id__in=qs_owned_businesses).values('user_id')

# Finally you want to filter those customers further based on your form field.
# Remember, the icontains criterion needs to refer to a field;
# here we're looking at username, but you might use last_name or something else.
results = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)

results should now be a list of users you can cycle through in your template to show names, usernames, etc.
76378468
76378523
All, I made a local branch in my fork a long time ago and pushed some changes to it. I then submitted a PR which passed the CI build. Now, after some time, I came back to the same machine on which I produced the PR, but I didn't check which branch I was on, made a couple of commits on the old branch, and pushed them, thereby screwing up the PR (it was not yet merged, due to the lack of test code). What I'd like to do now is go to the GitHub web interface and remove those commits, but keep them locally, because I can just generate a patch on my local machine, remove those commits, switch to the new branch, and apply the patch to it. Or maybe there is even a better solution? So how do I solve this mess? Keep in mind, I intend to finish the PR with the test, but those are 2 completely unrelated things. TIA!!

EDIT: Everything worked fine; my old branch on the original laptop is back to normal and the PR is now good. However, in order to add the unit test I had to go to a different machine and do a git pull. For some unknown reason, after that the git tree on that machine became clogged with everything, including the bad commits. I was able to revoke the bad commits with git reset --hard N, but I fear the same will happen when I try to test my unit test on all platforms/different laptops, which means my changes will be lost and I will need to redo them again for the UT on all the different machines. Can you help me here as well? TIA!!
Remove remote commits on the branch in GitHub
After some thought, my original answer is more complicated than strictly necessary, but I'll leave it below. The easiest way to get your original branch back to its old state and keep the new commits is to create a new branch, then reset the old branch and force push. It looks like this:

git checkout old-branch
git branch new-branch
git reset --hard <hash of commit you want to keep in old-branch>
git push -f

Alternatively you can use git reset --hard HEAD~n, where n is the number of commits you want to remove from the old branch.

Now you can do whatever you wish with the new branch, such as rebase it onto main. This might not be entirely necessary. If, for example, your PR is merged, you will need to pull those changes into the new branch anyway before making the second PR. However, if you want to make a 2nd PR before the 1st is merged, then it is better to keep them separate until one of them is merged.

TLDR

The easiest way to fix a remote repository is to first make the changes locally and then push, possibly force push, to GitHub or another remote.

Details

You can do this all locally first, then push to GitHub to fix the PR. First, you should create a new branch and git cherry-pick the commits that you want to keep but remove from the other branch. Start by getting the hashes of the commits you want:

git checkout old-branch
git log --oneline --graph

Copy the commit hashes for the commits you want to move. Then do

git checkout -b new-branch main

and for each of the hashes you copied:

git cherry-pick <hash>

Alternatively, you can do this more easily with git rebase. You only need the hash of the oldest commit you want to keep:

git checkout -b new-branch old-branch
git rebase --onto main <hash of oldest commit>~

Now go back to your old branch and get rid of all the commits you no longer want:

git checkout old-branch
git reset --hard <hash of the first commit you want to keep on this branch>

Finally, force push:

git push -f

This will automatically update the PR back to its original state, if you used the correct hash for the git reset command.
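Regarding the EDIT about other machines: after a force push rewrites history, a plain git pull on another clone will merge the old (bad) commits back in. A sketch of the usual way to hard-sync a second machine to the rewritten remote branch, assuming the remote is named origin and you have no local work to keep on that branch:

git fetch origin
git checkout old-branch
# Discard the local state entirely and match the rewritten remote branch.
git reset --hard origin/old-branch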
76378419
76378558
I am creating a Google Chrome extension. On the popup, I am displaying a leaderboard. However, I am new to JavaScript so I don't know how to properly use async. I am using chrome.storage to get stored scores to display on the leaderboard, then sending them from background.js to score.js. My issue is that, since chrome.storage.get happens asynchronously, my findScores method does not wait for chrome.storage.get to finish before incorrectly returning a default empty score. Here is my code:

background.js

chrome.runtime.onMessage.addListener(
    function(request, sender, sendResponse) {
        console.log(sender.tab ? "from a content script:" + sender.tab.url : "from the extension");
        if (request.type === "request") {
            var scoresVar = findScores(request.table, "All");
            console.log("Sending response " + scoresVar);
            sendResponse({scores: scoresVar})
        } else if (request.type === "score") {
            saveScore(request.website, request.score, request.tab);
            sendResponse("Finished adding score " + request.score);
        }
    }
);

function findScores(table, website) {
    const categories = table.split("-");
    if (categories.includes("personal")) {
        chrome.storage.sync.get([website], function(response) {
            if (!(typeof response[website] === 'undefined')) {
                console.log("Found " + response[website]);
                return response[website];
            }
        });
    } else if (categories.includes("global")) {
        // TODO: Add global leaderboards
        return ["-"];
    }
    console.log("Didn't find, on default");
    return ["-"];
}

popup.js

async function requestScores(tableID) {
    var url = "All"
    if (tableID.includes("current")) {
        var url = await getCurrentTab();
    }
    console.log("Sending message to load scores to " + url);
    (async () => {
        const response = await chrome.runtime.sendMessage({type: "request", request: "load scores", table: tableID, tab: url});
        console.log("Received: " + response);
        // add scores to HTML DOM
    })();
}

My console messages reveal that I first return a default score, which is sent to popup.js. I have tried throwing async keywords in front of functions (as well as "await" in front of variables, like scoresVar = await findScores(request.table, "All")), but it just caused more issues, where findScores still returned a default value, but background.js instead sent an undefined promise. How can I fix my code?
How to use async properly to get chrome.storage?
It is simpler to work with Promises and async/await instead of callbacks. chrome.storage.sync.get returns a Promise if you do not pass a callback.

async function findScores(table, website) {
    // ...
    if (categories.includes("personal")) {
        const response = await chrome.storage.sync.get([website]);
        if (response[website] !== undefined) {
            console.log("Found " + response[website]);
            return response[website];
        }
    }
    // ...
}

// ...

chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    // ...
    findScores(request.table, "All").then(scores => {
        console.log("Sending response " + scores);
        sendResponse({scores});
    });
    return true; // keep the messaging channel open for sendResponse
});

Note that the callback of onMessage should return a literal true value (see the documentation) in order to keep the internal messaging channel open so that sendResponse can work asynchronously.
76383950
76384028
I have a text field with an onSubmitted method, inside which I check validation and then focus another field, but for some reason the focus does not work:

onSubmitted: (value) {
  // print("ga test");
  if (!widget.validator?.call(value)) {
    setState(() {
      showError = true;
    });
  }
  if (widget.nextFocus != null) {
    FocusScope.of(context).requestFocus(widget.nextFocus);
  }
},
How do I change the focus of the text field on Submit?
I did the following and it worked:

if (widget.validator != null) {
  setState(() {
    showError = !widget.validator?.call(value);
  });
}
if (widget.nextFocus != null) {
  FocusScope.of(context).requestFocus(widget.nextFocus);
}
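For context, a minimal self-contained sketch of the same pattern (validate on submit, then move focus), assuming a plain TextField and a FocusNode; the widget names here are illustrative, not from the original post:

import 'package:flutter/material.dart';

class TwoFields extends StatefulWidget {
  const TwoFields({super.key});

  @override
  State<TwoFields> createState() => _TwoFieldsState();
}

class _TwoFieldsState extends State<TwoFields> {
  final FocusNode _second = FocusNode();
  bool _showError = false;

  @override
  void dispose() {
    _second.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        TextField(
          decoration: InputDecoration(errorText: _showError ? 'Invalid' : null),
          onSubmitted: (value) {
            // Treat an empty value as invalid for this sketch.
            setState(() => _showError = value.isEmpty);
            // Hand focus to the second field, as in the answer above.
            FocusScope.of(context).requestFocus(_second);
          },
        ),
        TextField(focusNode: _second),
      ],
    );
  }
}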
76380624
76380646
I have an application that was executing TestNG tests perfectly with Maven, for example when using a mvn clean install command. I have now updated the application to use Spring Boot 3.1.0, and the tests are completely ignored; no tests are executed. I am using a classic testng.xml file defined on the maven-surefire-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${maven-surefire-plugin.version}</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
</plugin>

All the solutions I have found relate to the Java test classes ending in *Test.java, but that does not apply here, as I am using the TestNG suite file. And before the update, the tests were working fine. What has changed in Spring Boot 3 that skips my tests?
TestNG tests are ignored after upgrading to Spring Boot 3 and maven-surefire-plugin 3.1.0
OK, I have found the "issue". It seems that the new versions of maven-surefire-plugin need the extra surefire-testng provider declared in order to execute TestNG suites:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-testng</artifactId>
            <version>3.1.0</version>
        </dependency>
    </dependencies>
</plugin>

After including the dependency on the plugin, it is now working fine.
76380600
76380785
I'm using the Okta provider to create okta_app_oauth and okta_app_group_assignments. My module looks like:

resource "okta_app_oauth" "app" {
  label                      = var.label
  type                       = var.type
  grant_types                = var.grant_types
  redirect_uris              = var.type != "service" ? var.redirect_uris : null
  response_types             = var.response_types
  login_mode                 = var.login_mode
  login_uri                  = var.login_uri
  post_logout_redirect_uris  = var.post_logout_redirect_uris
  consent_method             = var.consent_method
  token_endpoint_auth_method = var.token_endpoint_auth_method
  pkce_required              = var.token_endpoint_auth_method == "none" ? true : var.pkce_required

  lifecycle {
    ignore_changes = [
      client_basic_secret,
      groups
    ]
  }
}

resource "okta_app_group_assignments" "app" {
  app_id = okta_app_oauth.app.id
  dynamic "group" {
    for_each = var.app_groups
    content {
      id       = group.value["id"]
      priority = group.value["priority"]
    }
  }
}

It works when I assign groups to the application, but when I don't want to assign groups, I get this error:

│ Error: Invalid index
│
│   on main.tf line 26, in resource "okta_app_group_assignments" "app":
│   26:       id = group.value["id"]
│     ├────────────────
│     │ group.value is empty map of dynamic
│
│ The given key does not identify an element in this collection value.

In addition, my app_groups variable looks like:

variable "app_groups" {
  description = "Groups assigned to app"
  type        = list(map(any))
  default     = [{}]
}

I was trying to use lookup(group, "priority", null), but it didn't resolve my problem. Can somebody help me with solving this?
Terragrunt - make dynamic group optional
You can make the block optional as follows:

dynamic "group" {
  # Conditional expression syntax: condition ? true_val : false_val
  for_each = length(var.app_groups) > 0 ? var.app_groups : []
  content {
    id       = group.value["id"]
    priority = group.value["priority"]
  }
}

Also, your default value for app_groups should be an empty list rather than a list containing an empty map:

variable "app_groups" {
  description = "Groups assigned to app"
  type        = list(map(any))
  default     = []
}
76378487
76378586
I have a table PetsTable:

Id | Type  | key | value
---|-------|-----|------
1  | "Cat" | 10  | 5
1  | "Cat" | 9   | 2
2  | "dog" | 10  | 5
1  | "Cat" | 8   | 4
1  | "Cat" | 6   | 3
2  | "dog" | 8   | 4
2  | "dog" | 6   | 3
3  | "Cat" | 13  | 5
3  | "Cat" | 10  | 0
3  | "Cat" | 8   | 0

How do I insert this data into a new table MyPets from PetsTable with these conditions:

- Group by Id.
- Only select rows when the group contains (key = 10 and value = 5) and (key = 8 and value = 4) and (key = 6 and value = 3).
- If key = 9 exists, then mark hasFee = 1, else hasFee = 0.

The final table should look like:

Id | Type  | hasFee
---|-------|-------
1  | "Cat" | 1
2  | "dog" | 0
Group by and select rows based on whether value combinations exist
One approach is to use window functions to evaluate your conditions, which you can then apply as conditions using a CTE. This creates the data you desire; it's then trivial to insert it into a table of your choice.

create table Test (Id int, [Type] varchar(3), [Key] int, [Value] int);

insert into Test (Id, [Type], [Key], [Value])
values
(1, 'Cat', 10, 5),
(1, 'Cat', 9, 2),
(2, 'Dog', 10, 5),
(1, 'Cat', 8, 4),
(1, 'Cat', 6, 3),
(2, 'Dog', 8, 4),
(2, 'Dog', 6, 3),
(3, 'Cat', 13, 5),
(3, 'Cat', 10, 0),
(3, 'Cat', 8, 0);

with cte as (
  select *
    , sum(case when [Key] = 10 and [Value] = 5 then 1 else 0 end) over (partition by Id) Cond1
    , sum(case when [Key] = 8 and [Value] = 4 then 1 else 0 end) over (partition by Id) Cond2
    , sum(case when [Key] = 6 and [Value] = 3 then 1 else 0 end) over (partition by Id) Cond3
    , sum(case when [Key] = 9 then 1 else 0 end) over (partition by Id) HasFee
  from Test
)
select Id, [Type], HasFee
from cte
where Cond1 = 1 and Cond2 = 1 and Cond3 = 1
group by Id, [Type], HasFee;

Returns:

Id | Type | HasFee
---|------|-------
1  | Cat  | 1
2  | Dog  | 0

Note: if you provide your sample data in this format (DDL+DML), you make it much easier for people to assist. db<>fiddle
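Since the question asks to insert the result into MyPets, here is a sketch of that final step under the same schema (the MyPets DDL is assumed, as the original question doesn't define it):

create table MyPets (Id int, [Type] varchar(3), hasFee int);

with cte as (
  select *
    , sum(case when [Key] = 10 and [Value] = 5 then 1 else 0 end) over (partition by Id) Cond1
    , sum(case when [Key] = 8 and [Value] = 4 then 1 else 0 end) over (partition by Id) Cond2
    , sum(case when [Key] = 6 and [Value] = 3 then 1 else 0 end) over (partition by Id) Cond3
    , sum(case when [Key] = 9 then 1 else 0 end) over (partition by Id) HasFee
  from Test
)
-- Insert the qualifying groups straight into the target table.
insert into MyPets (Id, [Type], hasFee)
select Id, [Type], HasFee
from cte
where Cond1 = 1 and Cond2 = 1 and Cond3 = 1
group by Id, [Type], HasFee;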
76380579
76380826
For work I need to connect to test nodes and establish a VNC connection so you can see the desktop remotely. It's a manual process with a bunch of commands that need to be executed in order: perfect for automation using a bash script. The problem is that some commands need to be executed on the remote node after an ssh connection is established. Currently I've got it working like this, where startVNC is a separate bash file that stores the commands to be executed on the remote node after an ssh connection is established:

cat startVNC | sed -e "s/\$scaling/$scaling/" -e "s/\$address/$address/" -e "s/\$display/$display/" | ssh -X maintain@$host

For my question the contents of startVNC don't really matter, just that multiple commands can be executed in order. It could be:

echo "hello"
sleep 1
echo "world"

While for personal use this solution is fine, I find it a bit of a bother that this needs two separate bash files. If I want to share this file (which I do), it'd be better if it were just one file. My question is: is it possible to mimic the output of cat in some way using a variable?
How to store multiple commands in a bash variable (similar to cat otherscript.sh)
Well, you could do:

a="echo 'hello'\nsleep 2\necho world\n"

echo -e "$a"
# output-> echo 'hello'
# output-> sleep 2
# output-> echo world

echo -e "$a" | bash
# output-> hello
# waiting 2 secs
# output-> world

The -e in echo enables the interpretation of the \n.
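Another common way to keep everything in one script, not from the original answer but standard bash, is a heredoc, which avoids the \n escaping entirely. A sketch:

#!/usr/bin/env bash
# The quoted 'EOF' prevents local expansion; drop the quotes if you want
# variables like $scaling expanded locally before being sent to the remote.
ssh -X "maintain@$host" bash <<'EOF'
echo "hello"
sleep 1
echo "world"
EOF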
76383957
76384041
I have a demo Spring Integration project which is receiving Kafka messages, aggregating them, and then releasing them. I'm trying to add a JdbcMessageStore to the project. The problem is that it fails with this error:

Caused by: java.lang.IllegalArgumentException: Cannot store messages without an ID header
    at org.springframework.util.Assert.notNull(Assert.java:201) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
    at org.springframework.integration.jdbc.store.JdbcMessageStore.addMessage(JdbcMessageStore.java:314) ~[spring-integration-jdbc-5.3.8.RELEASE.jar:5.3.8.RELEASE]

After debugging I found that it requires the UUID header id in the message. But the problem is that I can't manually set the Kafka header id; it is forbidden (the same as the timestamp header). I tried to do this in a Kafka producer in a different project. If I use the IDEA plugin named Big Data Tools and send a message from there, I am able to set an id header, but it is received by my project as an array of bytes and fails with:

IllegalArgumentException Incorrect type specified for header 'id'. Expected [UUID] but actual type is [B]

I can't find any solution for how to resolve this issue. I need to somehow set this id header to be able to store messages in the database. Thanks in advance.
How to set ID header in Spring Integration Kafka Message?
The KafkaMessageDrivenChannelAdapter has an option:

/**
 * Set the message converter to use with a record-based consumer.
 * @param messageConverter the converter.
 */
public void setRecordMessageConverter(RecordMessageConverter messageConverter) {

where you can set a MessagingMessageConverter with:

/**
 * Generate {@link Message} {@code ids} for produced messages. If set to {@code false},
 * will try to use a default value. By default set to {@code false}.
 * @param generateMessageId true if a message id should be generated
 */
public void setGenerateMessageId(boolean generateMessageId) {
    this.generateMessageId = generateMessageId;
}

/**
 * Generate {@code timestamp} for produced messages. If set to {@code false}, -1 is
 * used instead. By default set to {@code false}.
 * @param generateTimestamp true if a timestamp should be generated
 */
public void setGenerateTimestamp(boolean generateTimestamp) {
    this.generateTimestamp = generateTimestamp;
}

set to true. This way the Message created from a ConsumerRecord will have the respective id and timestamp headers. Alternatively, you can simply have a "dummy" transformer that returns the incoming payload, and the framework will create a new Message for which those headers are generated.
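Putting the two options named above together, a minimal wiring sketch, assuming an existing KafkaMessageDrivenChannelAdapter instance called adapter (the surrounding adapter/consumer-factory setup is not shown):

// Build a converter that generates the id and timestamp headers.
MessagingMessageConverter converter = new MessagingMessageConverter();
converter.setGenerateMessageId(true);
converter.setGenerateTimestamp(true);

// Apply it to the record-based channel adapter.
adapter.setRecordMessageConverter(converter);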
76383902
76384109
I have some SQL that does some manipulation to the data, i.e. filling in empty columns:

SELECT *,
       ModifiedLineData = CASE
           WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'
               THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '"No PO Number"'), ',""')
           ELSE CONCAT(LineData, ',""')
       END
FROM (
    SELECT *,
           Column2 = CONVERT(XML, '<s>' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '</s><s>') + '</s>').value('/s[2]', 'varchar(100)')
    FROM [dbo].[Temp_Raw_Data]
    WHERE LineData NOT LIKE ',,,0,,,,0'
) AS Subquery

Now let's say this returns:

FileName | LineNumber | LineData               | Column2 | ModifiedLineData
---------|------------|------------------------|---------|-------------------------
file1    | 4          | 1232,,"product-1", 1,0 |         | 1232,NA,"product-1", 1,0
file2    | 7          | "failed"               | NULL    | "failed"
file3    | 8          | 1235,,"product-2", 1,0 |         | 1235,NA,"product-2", 1,0

How can I modify this query so that if Column2 is NULL, it concatenates that row's LineData onto the next row's ModifiedLineData (otherwise just concatenating a ,""), and then removes that NULL row (if possible; otherwise it doesn't matter), so that my result would look like:

FileName | LineNumber | LineData               | Column2 | ModifiedLineData
---------|------------|------------------------|---------|--------------------------------
file1    | 4          | 1232,,"product-1", 1,0 |         | 1232,NA,"product-1", 1,0,""
file3    | 8          | 1235,,"product-2", 1,0 |         | 1235,NA,"product-2", 1,0,"failed"

I tried playing around with LEAD() but couldn't get it how I wanted.

Note: two NULL rows can never be adjacent; this is due to the nature of the data. The "next row" is simply the next available row when selecting all rows, as they are imported one by one.

Updated query that isn't concatenating:

SELECT *
FROM (
    SELECT FileName, LineNumber, LineData, Column2,
           CASE WHEN LAG(Column2) OVER (ORDER BY LineNumber) IS NULL
                THEN CONCAT_WS(', ', ModifiedLineData, LAG(ModifiedLineData) OVER (ORDER BY LineNumber))
                ELSE ModifiedLineData
           END AS ModifiedLineData
    FROM (
        SELECT *,
               ModifiedLineData = CASE
                   WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'
                       THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '"No PO Number"'), '')
                   ELSE CONCAT(LineData, '')
               END
        FROM (
            SELECT *,
                   Column2 = CONVERT(XML, '<s>' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '</s><s>') + '</s>').value('/s[2]', 'varchar(100)')
            FROM [backstreet_WMS_Optimizer].[dbo].[Temp_GoodsIn_Raw_Data]
            WHERE LineData NOT LIKE ',,,0,,,,0'
        ) AS Subquery
    ) AS cte
) AS Subquery
WHERE Column2 IS NOT NULL
ORDER BY FileName, LineNumber
Concatenate onto Next Row
Given that you can't have consecutive NULL values, using LEAD/LAG should be suitable for this task. Without knowledge of your original data, we can work with your query and add two subqueries on top, the last of which is optional:

- the inner one adds the needed information to the record following each "Column2 = NULL" record;
- the outer one removes the records having those NULL values.

SELECT *
FROM (
    SELECT FileName, LineNumber, LineData, Column2,
           CASE WHEN LAG(Column2) OVER (ORDER BY LineNumber) IS NULL
                THEN CONCAT_WS(', ', ModifiedLineData, LAG(ModifiedLineData) OVER (ORDER BY LineNumber))
                ELSE ModifiedLineData
           END AS ModifiedLineData
    FROM <your query>
) cte
WHERE Column2 IS NOT NULL

Output:

FileName | LineNumber | LineData               | Column2 | ModifiedLineData
---------|------------|------------------------|---------|--------------------------------
file1    | 4          | 1232,,"product-1", 1,0 |         | 1232,NA,"product-1", 1,0
file3    | 8          | 1235,,"product-2", 1,0 |         | 1235,NA,"product-2", 1,0"failed"

Check the demo here.
76378480
76378588
I'm working through The Odin Project and I'm having trouble making my main content take up the rest of the space of the browser. Right now the 1px solid red border is as far as the main content goes. I have tried this, but it's not allowing for a fixed header and footer. I have also tried some other flex solutions; those are commented out in the code. Am I just doing this whole thing wrong? Is there a standard way that I don't know about?

index.html:

<body>
  <div class="header">
    <h1> MY AWESOME WEBSITE </h1>
  </div>
  <div class="main-content">
    <div class="sidebar">
      <ul>
        <li><a href="#">⭐ - link one</a></li>
        <li><a href="#">🦸🏽‍♂️ - link two</a></li>
        <li><a href="#">🖌️ - link three</a></li>
        <li><a href="#">👌🏽 - link four</a></li>
      </ul>
    </div>
    <div class="content">
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem dignissimos maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit corrupti veritatis minima porro?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam maiores velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi odit ea.</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
    </div>
  </div>
  <div class="footer"> The Odin Project ❤️ </div>
</body>
</html>

style-07.css:

:root {
  --header-height: 72px;
}

body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  margin: 0;
  min-height: 100vh;
  height: 100%;
}

.main-content {
  display: flex;
  height: 100%;
  /* If I use px units it will force the main content to go down
     but I know that is not ideal. */
  padding-top: var(--header-height);
  flex-direction: row;
  border: 1px solid red;
  /* Things I have tried from other answers */
  /* flex: 1 1 auto; */
  /* height: calc(100% - var(--header-height)); */
}

.sidebar {
  flex-shrink: 0;
}

.content {
  padding: 32px;
  display: flex;
  flex-wrap: wrap;
}

.card {
  width: 300px;
  padding: 16px;
  margin: 16px;
}

.header {
  position: fixed;
  top: 0;
  left: 0;
  right: 0;
  display: flex;
  align-items: center;
  height: var(--header-height);
  background: darkmagenta;
  color: white;
  padding: 0px 15px;
}

h1 {
  font-weight: 1000;
}

.footer {
  height: var(--header-height);
  background: #eee;
  color: darkmagenta;
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  width: 100%;
  height: 5%;
  display: flex;
  justify-content: center;
  align-items: center;
}

.sidebar {
  width: 300px;
  background: royalblue;
  box-sizing: border-box;
  padding: 16px;
}

.card {
  border: 1px solid #eee;
  box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);
  border-radius: 4px;
}

ul {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

a {
  text-decoration: none;
  color: white;
  font-size: 24px;
}

li {
  margin-bottom: 16px;
}
How do I get my main content to take up the rest of the space left over after the header and footer?
You can use flex display on the body instead of position: fixed on the header and footer: make the body display: flex with column direction, then for main-content all you need is to set flex: 1 and remove the padding-top; flex: 1 makes sure main-content takes any remaining space in the parent. Set the body to height: 100vh and overflow: hidden, and for main-content set overflow: auto.

Additionally, to make the sidebar sticky when scrolling, I added position: relative to main-content and position: sticky to the sidebar. To enforce the header and footer heights and prevent them from being squeezed by the flex layout, use min-height instead of height, as I modified in the code.

Try to view the run code in full page; if you have any further questions, comment below.

:root {
  --header-height: 72px;
}

body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  margin: 0;
  height: 100vh;
  overflow: hidden;
  display: flex;
  flex-direction: column;
}

.main-content {
  flex: 1;
  display: flex;
  overflow-y: auto;
  flex-direction: row;
  border: 1px solid red;
  position: relative;
}

.content {
  padding: 32px;
  display: flex;
  flex-wrap: wrap;
}

.card {
  width: 300px;
  padding: 16px;
  margin: 16px;
}

.header {
  display: flex;
  align-items: center;
  min-height: var(--header-height);
  background: darkmagenta;
  color: white;
  padding: 0px 15px;
}

h1 {
  font-weight: 1000;
}

.footer {
  min-height: var(--header-height);
  background: #eee;
  color: darkmagenta;
  width: 100%;
  height: 5%;
  display: flex;
  justify-content: center;
  align-items: center;
}

.sidebar {
  width: 300px;
  background: royalblue;
  box-sizing: border-box;
  padding: 16px;
  position: sticky;
  top: 0;
  white-space: nowrap;
  min-height: 250px;
}

.card {
  border: 1px solid #eee;
  box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);
  border-radius: 4px;
}

ul {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

a {
  text-decoration: none;
  color: white;
  font-size: 24px;
}

li {
  margin-bottom: 16px;
}

<body>
  <div class="header">
    <h1> MY AWESOME WEBSITE </h1>
  </div>
  <div class="main-content">
    <div class="sidebar">
      <ul>
        <li><a href="#">⭐ - link one</a></li>
        <li><a href="#">🦸🏽‍♂️ - link two</a></li>
        <li><a href="#">🖌️ - link three</a></li>
        <li><a href="#">👌🏽 - link four</a></li>
      </ul>
    </div>
    <div class="content">
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem dignissimos maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit corrupti veritatis minima porro?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam maiores velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi odit ea.</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
    </div>
  </div>
  <div class="footer"> The Odin Project ❤️ </div>
</body>
</html>
76384080
76384128
For whatever reason, my Kotlin program won't initialize variables assigned inside a when statement. Here's the code:

import kotlin.random.Random
import kotlin.random.nextInt

val mood: String
when (Random.nextInt(1..2)) {
    1 -> {
        mood = "loud"
        println("$mood")
    }
    2 -> {
        mood = "quiet"
        println("$mood")
    }
}
println("$mood")

The lines inside the when statement are printed, but when I run the last line, I get a "Variable 'mood' must be initialized" error. I don't know what I could possibly be doing wrong here...
Can't initialize variables inside of when statement in Kotlin
In Kotlin, variables declared with the val keyword must be definitely assigned before they are used. In your code, the mood variable is declared without an initial value, and you assign values to it inside the when statement. However, the compiler cannot prove that either branch will be executed at runtime (it does not know that Random.nextInt(1..2) can only return 1 or 2), so it does not consider the variable initialized by the time the final println runs.

To fix this, give the compiler a guarantee that mood is always assigned: either assign an initial value at the declaration (declaring it as var so it can be reassigned later), or make the when exhaustive by adding an else branch. Note that simply switching val to var is not enough on its own; without an else branch the compiler still cannot prove the variable is initialized. Here's an updated version using an else branch:

import kotlin.random.Random
import kotlin.random.nextInt

val mood: String
when (Random.nextInt(1..2)) {
    1 -> {
        mood = "loud"
        println("$mood")
    }
    else -> {
        mood = "quiet"
        println("$mood")
    }
}
println("$mood")

Because every path through the when now assigns mood exactly once, the compiler accepts it, even with val. Alternatively, declare it as var mood = "" and keep your original branches; the variable then starts initialized and can be reassigned inside the when.
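A more idiomatic sketch of the same program uses when as an expression, which sidesteps the initialization question entirely because the value is produced by the when itself:

import kotlin.random.Random
import kotlin.random.nextInt

fun main() {
    // A `when` used as an expression must be exhaustive, so it always yields a value.
    val mood: String = when (Random.nextInt(1..2)) {
        1 -> "loud"
        else -> "quiet"
    }
    println(mood)
}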
76380728
76380908
My deep link works fine on Android and transfers information to the app, but it doesn't work on iOS.

Firebase link: https://dvzpl.com
My short link: https://dvzpl.com/6BG2
My domain: https://dovizpanel.com/

My associated domains:

<dict>
  <key>aps-environment</key>
  <string>development</string>
  <key>com.apple.developer.associated-domains</key>
  <array>
    <string>webcredentials:dvzpl.com</string>
    <string>applinks:dvzpl.com</string>
  </array>
</dict>

How do I fix this? When I open the short link in the browser, it goes into the app but does not transfer the data on iOS; on Android there are no problems.

<key>FirebaseDynamicLinksCustomDomains</key>
<array>
  <string>https://dovizpanel.com/blog</string>
  <string>https://dovizpanel.com/exchanger</string>
  <string>https://dovizpanel.com/link</string>
</array>
Flutter Deep Link Firebase in iOS
If you are using a custom domain for Firebase Dynamic Links, follow the instructions below.

In your Xcode project's Info.plist file, create a key called FirebaseDynamicLinksCustomDomains and set it to your app's Dynamic Links URL prefixes. For example:

<key>FirebaseDynamicLinksCustomDomains</key>
<array>
  <string>https://dvzpl.com</string>
</array>

You can find more details directly in the Firebase documentation.
76384218
76384269
My question is: how can I toggle/display the "Some text" content on onClick individually? I can use a different function and state for every div and it works, but I know this is not the correct way to do it. Can you help me with this, guys? Thanks. This is my code:

function App() {
  const [loaded, setLoaded] = useState(true);
  const [show, setShow] = useState(false);

  const handleShow = () => {
    setShow(!show);
  };

  return (
    <div className={styles.App}>
      {loaded && (
        <div className={styles.cards_container}>
          <div className={styles.card_container} onClick={handleShow}>
            <h3>Title</h3>
            {show && (
              <div>
                <p>Some text</p>
              </div>
            )}
          </div>
          <div className={styles.card_container} onClick={handleShow}>
            <h3>Title</h3>
            {show && (
              <div>
                <p>Some text</p>
              </div>
            )}
          </div>
          <div className={styles.card_container} onClick={handleShow}>
            <h3>Title</h3>
            {show && (
              <div>
                <p>Some text</p>
              </div>
            )}
          </div>
        </div>
      )}
    </div>
  );
}
How to toggle/display content individually in ReactJS
You could create a custom component for your card that handles the state for each card:

function Card() {
  const [show, setShow] = useState(false);

  const handleShow = () => {
    setShow(state => !state);
  };

  return (
    <div className={styles.card_container} onClick={handleShow}>
      <h3>Title</h3>
      {show && (
        <div>
          <p>Some text</p>
        </div>
      )}
    </div>
  );
}

And use it in your app:

function App() {
  const [loaded, setLoaded] = useState(true);

  return (
    <div className={styles.App}>
      {loaded && (
        <div className={styles.cards_container}>
          <Card />
          <Card />
          <Card />
        </div>
      )}
    </div>
  );
}
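To give each card its own title and body rather than the repeated placeholders, the same component can take props. A small sketch extending the Card above (the prop names are illustrative, assuming the same styles module import as the question):

function Card({ title, text }) {
  const [show, setShow] = useState(false);

  return (
    <div className={styles.card_container} onClick={() => setShow(s => !s)}>
      <h3>{title}</h3>
      {show && <p>{text}</p>}
    </div>
  );
}

// Usage:
// <Card title="First" text="Some text" />
// <Card title="Second" text="Other text" />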
76383839
76384284
The following query was used as part of a security audit to identify users with access to install/uninstall server plugins at the database level:

SELECT user, host
FROM mysql.db
WHERE db = 'mysql'
  and (insert_priv='y') or (delete_priv='y') or (insert_priv='y' and delete_priv='y');

I need to revoke that permission from the users that are listed. Is there a specific privilege I revoke to do this? If so, I can't find it. Or would I simply UPDATE the insert_priv and delete_priv fields directly in the mysql.db table? I'm not a DBA, but I'm the closest thing we have at the moment.
Revoking permission to install plugins?
You are able to install plugins when you have INSERT permission on the mysql.plugin table; see INSTALL PLUGIN:

    To use INSTALL PLUGIN, you must have the INSERT privilege for the mysql.plugin table.

So when you have database-wide INSERT permission on the (internal, administrative) mysql database, you can install plugins. The same goes for the UNINSTALL PLUGIN statement; see UNINSTALL PLUGIN:

    To use UNINSTALL PLUGIN, you must have the DELETE privilege for the mysql.plugin table.

Remove the insert_priv and delete_priv privileges for the mysql database; your "normal" MySQL user accounts shouldn't be able to write to this database anyway.
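For a concrete starting point, the usual way to drop those two privileges is a REVOKE per listed account rather than editing the grant tables by hand. A sketch (the account name is a placeholder):

-- Revoke the privileges that allow writing to the mysql schema,
-- which is what enables INSTALL/UNINSTALL PLUGIN.
REVOKE INSERT, DELETE ON mysql.* FROM 'some_user'@'some_host';

-- Verify what the account can still do.
SHOW GRANTS FOR 'some_user'@'some_host';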
76378670
76378715
I am new to pandas. I have this data frame, df['educ1'], which gives:

1        4
2        3
3        3
4        4
5        1
        ..
28461    3
28462    2
28463    3
28464    2
28465    4
Name: educ1, Length: 28465, dtype: int64

When I try querying with

dt = df[df.educ1 > 1]

it works fine, returning multiple rows. But when I try

college_grad_mask = (df.educ1 > 1)
df.where(college_grad_mask).dropna().head()

it gives 0 rows. I wonder what is wrong here?
pandas dataframe query not working with where
You likely have NaNs in many columns; df.where keeps every row but masks non-matching rows with NaN, so a bare dropna() then discards any row with a NaN in any column. Try to subset:

df.where(college_grad_mask).dropna(subset=['educ1']).head()

Or better:

df[college_grad_mask].head()
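A tiny self-contained demo of the difference (hypothetical data, not the asker's):

import pandas as pd
import numpy as np

df = pd.DataFrame({"educ1": [1, 3, 4], "other": [np.nan, np.nan, 7.0]})

mask = df.educ1 > 1
# where() keeps the shape and fills non-matching rows with NaN...
print(df.where(mask))
# ...so dropna() with no arguments also drops matching rows that happen
# to have NaN in an unrelated column (here: 'other').
print(df.where(mask).dropna())                   # only the last row survives
print(df.where(mask).dropna(subset=["educ1"]))   # rows 1 and 2 survive
print(df[mask])                                  # boolean indexing: rows 1 and 2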
76378383
76378734
I'm learning tidymodels. The following code runs nicely:

library(tidyverse)
library(tidymodels)

# Draw a random sample of 2000 to try the models
set.seed(1234)
diamonds <- diamonds %>% sample_n(2000)

diamonds_split <- initial_split(diamonds, prop = 0.80, strata = "price")
diamonds_train <- training(diamonds_split)
diamonds_test <- testing(diamonds_split)

folds <- rsample::vfold_cv(diamonds_train, v = 10, strata = "price")
metric <- metric_set(rmse, rsq, mae)

# Model KNN
knn_spec <- nearest_neighbor(
  mode = "regression",
  neighbors = tune("k"),
  engine = "kknn"
)

knn_rec <- recipe(price ~ ., data = diamonds_train) %>%
  step_log(all_outcomes()) %>%
  step_normalize(all_numeric_predictors()) %>%
  step_dummy(all_nominal_predictors())

knn_wflow <- workflow() %>%
  add_model(knn_spec) %>%
  add_recipe(knn_rec)

knn_grid = expand.grid(k = c(1, 5, 10, 30))

knn_res <- tune_grid(
  knn_wflow,
  resamples = folds,
  metrics = metric,
  grid = knn_grid
)

collect_metrics(knn_res)
autoplot(knn_res)
show_best(knn_res, metric = "rmse")

# Best KNN
best_knn_spec <- nearest_neighbor(
  mode = "regression",
  neighbors = 10,
  engine = "kknn"
)

best_knn_wflow <- workflow() %>%
  add_model(best_knn_spec) %>%
  add_recipe(knn_rec)

best_knn_fit <- last_fit(best_knn_wflow, diamonds_split)
collect_metrics(best_knn_fit)

But when I try to fit the best model on the training set and apply it to the test set, I run into problems. The following two lines give me the error: "Error in step_log(): ! The following required column is missing from new_data in step 'log_mUSAb': price. Run rlang::last_trace() to see where the error occurred."

# Predict manually
f1 = fit(best_knn_wflow, diamonds_train)
p1 = predict(f1, new_data = diamonds_test)
Problem when scoring new data -- tidymodels
This problem is related to: log transform outcome variable in tidymodels workflow.

For log transformations of the outcome, we strongly recommend that the transformation be done before you pass the data to the recipe(). This is because you are not guaranteed to have an outcome when predicting on new data (which is what happens when you last_fit() a workflow), and the recipe fails. You are seeing this here because when you predict on a workflow() object, it only passes the predictors, as that is all it needs; hence the error. Since a log transformation isn't a learned transformation, you can safely do it beforehand:

diamonds_train$price <- log(diamonds_train$price)

if (!is.null(diamonds_test$price)) {
  diamonds_test$price <- log(diamonds_test$price)
}
76380693
76380922
Is it possible to name a term created in a formula? This is the scenario. Create a toy dataset:

set.seed(67253)
n <- 100
x <- sample(c("A", "B", "C"), size = n, replace = TRUE)
y <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)
dat <- data.frame(x, y)
head(dat)
#>   x         y
#> 1 B 4.5014474
#> 2 C 4.0252796
#> 3 C 2.4958761
#> 4 C 0.6725571
#> 5 B 4.3364206
#> 6 C 3.9798909

Fit a regression model:

out <- lm(y ~ x, dat)
summary(out)
#>
#> Call:
#> lm(formula = y ~ x, data = dat)
#>
#> Residuals:
#>      Min       1Q   Median       3Q      Max
#> -2.07296 -0.52161 -0.03713  0.53898  2.12497
#>
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)
#> (Intercept)   2.1138     0.1726  12.244  < 2e-16 ***
#> xB            1.6772     0.2306   7.274 9.04e-11 ***
#> xC            0.5413     0.2350   2.303   0.0234 *
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared:  0.3703, Adjusted R-squared:  0.3573
#> F-statistic: 28.52 on 2 and 97 DF,  p-value: 1.808e-10

Fit the model again, but use "C" as the reference group:

out2 <- lm(y ~ relevel(factor(x), ref = "C"), dat)
summary(out2)
#>
#> Call:
#> lm(formula = y ~ relevel(factor(x), ref = "C"), data = dat)
#>
#> Residuals:
#>      Min       1Q   Median       3Q      Max
#> -2.07296 -0.52161 -0.03713  0.53898  2.12497
#>
#> Coefficients:
#>                                Estimate Std. Error t value Pr(>|t|)
#> (Intercept)                      2.6551     0.1594  16.653  < 2e-16 ***
#> relevel(factor(x), ref = "C")A  -0.5413     0.2350  -2.303   0.0234 *
#> relevel(factor(x), ref = "C")B   1.1359     0.2209   5.143 1.41e-06 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared:  0.3703, Adjusted R-squared:  0.3573
#> F-statistic: 28.52 on 2 and 97 DF,  p-value: 1.808e-10

The variable x was re-leveled in the second call to lm(). This is done in the formula, and so the name of this term is relevel(factor(x), ref = "C"). Certainly, we can create the term before calling lm(), e.g.:

dat$x2 <- relevel(factor(x), ref = "C")
out3 <- lm(y ~ x2, dat)
summary(out3)
#>
#> Call:
#> lm(formula = y ~ x2, data = dat)
#>
#> Residuals:
#>      Min       1Q   Median       3Q      Max
#> -2.07296 -0.52161 -0.03713  0.53898  2.12497
#>
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)
#> (Intercept)   2.6551     0.1594  16.653  < 2e-16 ***
#> x2A          -0.5413     0.2350  -2.303   0.0234 *
#> x2B           1.1359     0.2209   5.143 1.41e-06 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared:  0.3703, Adjusted R-squared:  0.3573
#> F-statistic: 28.52 on 2 and 97 DF,  p-value: 1.808e-10

However, can I create a term and name it in the formula? If yes, how?
How to name a term created in the formula when calling `lm()`?
Adapted from the info in this comment: Rename model terms in lm object for forecasting. The trick is to create, and name, the new column inside the data argument with transform(), so the formula can refer to it by that name:

set.seed(67253)
n <- 100
x <- sample(c("A", "B", "C"), size = n, replace = TRUE)
y <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)
dat <- data.frame(x, y)

out <- lm(y ~ x, dat)
summary(out)

# Name the releveled term "x2" by building it in transform()
out2 <- lm(y ~ x2, transform(dat, x2 = relevel(factor(x), ref = "C")))
summary(out2)
76378708
76378750
I am trying to translate Stata code from a paper into R. The Stata code looks like this:

g tau = year - temp2 if temp2 > temp3 & (bod<. | do<. | lnfcoli<.)

My R translation looks like this:

data <- data %>%
  mutate(tau = if_else((temp2 > temp3) & (is.na(bod) | is.na(do) | is.na(lnfcoli)),
                       year - temp2,
                       NA_integer_))

The problem is that when I run each code I get different results. This is the result I get when I run the code in Stata:

Year | temp2 | temp3 | bod | do  | lnfcoli | tau
1986 | 1995  | 1986  | 3.2 | 7.2 | 2.1     | -9

This is the result I get when I run the code in R:

Year | temp2 | temp3 | bod | do  | lnfcoli | tau
1986 | 1995  | 1986  | 3.2 | 7.2 | 2.1     | NA

Do you know what might be wrong with my R code, or what I should modify to get the same output?
Translating Stata to R yields different results
None of bod, do, or lnfcoli is missing (NA), so your logic returns FALSE and yields NA_integer_ (the false= branch of if_else). Stata treats . (missing values) as positive infinity, so bod<. is actually checking for "not missing". The equivalent in R/dplyr is therefore:

data %>%
  mutate(
    tau = if_else(
      (temp2 > temp3) & (!(is.na(bod) | is.na(do) | is.na(lnfcoli))),
      year - temp2,
      NA_integer_
    )
  )
#   year temp2 temp3 bod  do lnfcoli tau
# 1 1986  1995  1986 3.2 7.2     2.1  -9
76383859
76384297
This C++ code cannot compile:

#include <iostream>

int main() {
    constexpr int kInt = 123;
    struct LocalClass {
        void func() {
            const int b = std::max(kInt, 12);
            //                     ^~~~
            // error: use of local variable with automatic storage from containing function
            std::cout << b;
        }
    };
    LocalClass a;
    a.func();
    return 0;
}

But this works:

#include <iostream>
#include <vector>

int main() {
    constexpr int kInt = 123;
    struct LocalClass {
        void func() {
            const int b = std::max((int)kInt, 12);  // added an extra conversion "(int)"
            std::cout << b;
            const int c = kInt;  // this is also ok
            std::cout << c;
            const auto d = std::vector{kInt};  // also works
            std::cout << d[0];
        }
    };
    LocalClass a;
    a.func();
    return 0;
}

Tested under C++17 and C++20; same behaviour.
Why a local class sometimes cannot access constexpr variables defined in function scope
1. odr-using local entities from nested function scopes

Note that kInt still has automatic storage duration, so it is a local entity as per:

    6.1 Preamble [basic.pre]
    (7) A local entity is a variable with automatic storage duration, [...]

In general, local entities cannot be odr-used from nested function definitions (as in your LocalClass example). This is given by:

    6.3 One-definition rule [basic.def.odr]
    (10) A local entity is odr-usable in a scope if:
    [...]
    (10.2) for each intervening scope between the point at which the entity is introduced and the scope (where *this is considered to be introduced within the innermost enclosing class or non-lambda function definition scope), either:
    - the intervening scope is a block scope, or
    - the intervening scope is the function parameter scope of a lambda-expression that has a simple-capture naming the entity or has a capture-default, and the block scope of the lambda-expression is also an intervening scope.

    If a local entity is odr-used in a scope in which it is not odr-usable, the program is ill-formed.

So the only times you can odr-use a local variable within a nested scope are nested block scopes and lambdas which capture the local variable, i.e.:

void foobar() {
    int x = 0;
    { // OK: x is odr-usable here because there is only an intervening block scope
        std::cout << x << std::endl;
    }

    // OK: x is odr-usable here because it is captured by the lambda
    auto l = [&]() { std::cout << x << std::endl; };

    // NOT OK: there is an intervening function definition scope
    struct K {
        int bar() { return x; }
    };
}

11.6 Local class declarations [class.local] contains a few examples of what is and is not allowed, if you're interested.

So if use of kInt constitutes an odr-use, your program is automatically ill-formed.

2. Is naming kInt always an odr-use?

In general, naming a variable constitutes an odr-use of that variable:

    6.3 One-definition rule [basic.def.odr]
    (5) A variable is named by an expression if the expression is an id-expression that denotes it. A variable x that is named by a potentially-evaluated expression E is odr-used by E unless [...]

But because kInt is a constant expression, the special exception (5.2) could apply:

    6.3 One-definition rule [basic.def.odr]
    (5.2) x is a variable of non-reference type that is usable in constant expressions and has no mutable subobjects, and E is an element of the set of potential results of an expression of non-volatile-qualified non-class type to which the lvalue-to-rvalue conversion is applied, or

So naming kInt is not deemed an odr-use as long as it...

- is of non-reference type (✓)
- is usable in constant expressions (✓)
- does not contain mutable members (✓)

...and the expression that contains kInt...

- must produce a non-volatile-qualified non-class type (✓)
- must apply the lvalue-to-rvalue conversion (?)

So we pass almost all the checks for the naming of kInt not to be an odr-use, and therefore be well-formed. The only condition that is not always true in your example is the lvalue-to-rvalue conversion that must happen. If the lvalue-to-rvalue conversion does not happen (i.e., no temporary is introduced), then your program is ill-formed; if it does happen, then it is well-formed.

// lvalue-to-rvalue conversion will be applied to kInt:
// (well-formed)
const int c = kInt;
std::vector v{kInt}; // vector constructor takes a std::size_t

// lvalue-to-rvalue conversion will NOT be applied to kInt:
// (it is passed by reference to std::max)
// (ill-formed)
std::max(kInt, 12); // std::max takes arguments by const reference (!)

This is also the reason why std::max((int)kInt, 12); is well-formed: the explicit cast introduces a temporary, due to the lvalue-to-rvalue conversion being applied.
76380850
76380929
Let's say I have a React Select with a placeholder ('Selected Value: '), and I want to keep the placeholder and append it into the selected value so that it looks something like ('Selected Value: 1'). Is there any way to do it? import Select from "react-select"; export default function App() { const options = [ { value: 1, label: 1 }, { value: 2, label: 2 }, { value: 3, label: 3 }, { value: 4, label: 4 } ]; const placeholder = "Selected Value: "; return ( <div className="App"> <Select options={options} placeholder={placeholder} /> </div> ); } codesandbox: https://codesandbox.io/s/brave-chatterjee-pjol2d?file=/src/App.js:23-385 EDIT: Sorry, forgot to mention, I do not want the placeholder to directly be in the labels of the options
How do I keep and append placeholder text into the selected value in React Select?
You can keep the selected value in state and build the placeholder string from it: import Select from "react-select"; import { useState } from "react"; export default function App() { const [selectBoxValue, setSelectBoxValue] = useState('') const options = [ { value: 1, label: 1 }, { value: 2, label: 2 }, { value: 3, label: 3 }, { value: 4, label: 4 } ]; const placeholder = `Selected Value: ${selectBoxValue}`; return ( <div className="App"> <Select options={options} placeholder={placeholder} value={placeholder} onChange={(event) => setSelectBoxValue(event.value)} /> </div> ); }
76380934
76380982
Installed FlareSolverr in Docker. cURL works correctly and returns the correct response. curl -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{ "cmd": "request.get", "url":"http://google.com", "maxTimeout": 60000 }' but when calling it from Python + Flask I get an error: 405 Method Not Allowed def get_parsed_page(url, delay=0.5): data = { "cmd": "request.get", "url": url, "maxTimeout": 60000 } headers = {"Content-Type": "application/json"} time.sleep(delay) print(requests.get("***:8191/v1", headers=headers, data=data)) return BeautifulSoup(requests.get("***:8191/v1", headers=headers, data=data).text, 'lxml')
Method not allowed, flask, python
You are using a GET request in your Python code, but the FlareSolverr endpoint only accepts POST (which is what your working cURL command sends). Use requests.post instead of requests.get, and pass the payload with json= so it is serialized as JSON rather than form-encoded.
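For illustration, a minimal sketch of the fixed function, assuming FlareSolverr is reachable at http://localhost:8191 as in the cURL example (adjust the host to your setup):

import time

import requests
from bs4 import BeautifulSoup


def get_parsed_page(url, delay=0.5):
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000,
    }
    time.sleep(delay)
    # json= serializes the dict and sets the Content-Type: application/json
    # header for you; data= would send the dict form-encoded instead.
    response = requests.post("http://localhost:8191/v1", json=payload)
    return BeautifulSoup(response.text, "lxml")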
76378592
76378760
I was having this problem about a week ago in this code a = int(input()) b = int(input()) c = int(input()) print(min(a+b,b+c,c+a)) so when I enter three inputs in one line like this: 2 5 6 (three integers in 1 line) it shows me an error: File "c:\Users\Administrator\Documents\Code\Python\baitap(LQDOJ)\EZMIN.py", line 1, in <module> a = int(input()) ValueError: invalid literal for int() with base 10: '2 5 6' and I see that it only reads 'a' but not 'b' and 'c'. Can you show me how to fix it, or are there other ways to write it in 1 line?
Can't write three values in 1 line
Method 1 The error you're encountering is because you're trying to convert the entire string '2 5 6' into an integer using the int() function. However, the int() function expects a single integer value, not a string containing multiple numbers. Code: a = int(input()) b = int(input()) c = int(input()) x = a + b y = b + c z = c + a min_value = x if y < min_value: min_value = y if z < min_value: min_value = z print("The minimum value is:", min_value) With this version you'll be prompted to enter the values for a, b, and c separately, and the code will correctly calculate and display the minimum value among the three sums. Method 2 This one is a more optimized solution: input_values = input() input_list = list(map(int, input_values.split())) min_value = min(input_list[0] + input_list[1], input_list[1] + input_list[2], input_list[2] + input_list[0]) print("The minimum value is:", min_value) The split() method splits the input string at spaces, creating a list of string elements. The map() function applies the int() function to each element of the split list, converting them into integers. list() is used to convert the resulting map object into a list of integers. The resulting list is stored in input_list for further calculations.
76383945
76384326
I'm trying to define a custom interface like this: export interface IAPIRequest<B extends any, P extends any, Q extends any> { body: B; params: P; query: Q; } This type is supposed to be extended in a lot of other types, one for each request my API is supposed to handle. For example: export interface ILoginRequest extends IAPIRequest<{ email: string; password: string; }, undefined, undefined> {} It works a little, but every time I use this interface, I must provide all the properties even if they are undefined. Example: const login = async ({ body }: ILoginRequest) => { ... } const response = await login({ body: { email: '[email protected]', password: 'verystrongpassword' }, params: undefined, query: undefined }); It doesn't work if I don't provide the undefined properties. How can I define an abstract type for IAPIRequest that would save me from providing undefined values? PS: I've tried this as well export interface IAPIRequest<B extends any, P extends any, Q extends any> { body?: B; params?: P; query?: Q; } Even for IAPIRequest<B, P, Q> where none of B, P, or Q allow undefined, I still get that the properties might be undefined
Typescript type extension
TypeScript doesn't automatically treat properties that accept undefined to be optional (although the converse, treating optional properties as accepting undefined, is true, unless you've enabled --exactOptionalPropertyTypes). There is a longstanding open feature request for this at microsoft/TypeScript#12400 (the title is about optional function parameters, not object properties, but the issue seems to have expanded to include object properties also). Nothing has been implemented there, although the discussion describes various workarounds. Let's define our own workaround; a utility type UndefinedIsOptional<T> that produces a version of T such that any property accepting undefined is optional. It could look like this: type UndefinedIsOptional<T extends object> = (Partial<T> & { [K in keyof T as undefined extends T[K] ? never : K]: T[K] } ) extends infer U ? { [K in keyof U]: U[K] } : never That's a combination of Partial<T> which turns all properties optional, and a key remapped type that suppresses all undefined-accepting properties. The intersection of those is essentially what you want (an intersection of an optional prop and a required prop is a required prop) but I use a technique described at How can I see the full expanded contract of a Typescript type? to display the type in a more palatable manner. Then we can define your type as type IAPIRequest<B, P, Q> = UndefinedIsOptional<{ body: B; params: P; query: Q; }> and note that this must be a type alias and not an interface because the compiler needs to know exactly which properties will appear (and apparently their optional-ness) to be an interface. This won't matter much with your example code but you should be aware of it. Let's test it out: type ILR = IAPIRequest<{ email: string; password: string; }, undefined, undefined> /* type ILR = { body: { email: string; password: string; }; params?: undefined; query?: undefined; } */ That looks like what you wanted, so you can define your ILoginRequest interface: interface ILoginRequest extends IAPIRequest< { email: string; password: string; }, undefined, undefined> { } Also, let's just look at what happens when the property includes undefined but is not only undefined: type Other = IAPIRequest<{ a: string } | undefined, number | undefined, { b: number }>; /* type Other = { body?: { a: string; } | undefined; params?: number | undefined; query: { b: number; }; } */ Here body and params are optional because undefined is possible, but query is not because undefined is impossible. Playground link to code
76380868
76380985
This Quarkus mailer guide requires that the sending email is preconfigured in property file: [email protected]. However, my use case for email includes unique originator email based on user. Using the provided method looks something like: public void sendEmail(EmailSender emailSender) { // Send to each recipient emailMessageRepository.findByEmailSenderId(emailSender.getId()) .forEach(emailMessage -> mailer.send( Mail.withText(emailMessage.getEmail(), emailSender.getSubject(), emailSender.getMessage()) ); ); } How can I include the sender's email address (i.e. 'from') when the Mail.withText() method only provides for recipient email?
How to configure the Quarkus Mailer extension to allow dynamic 'from' email addresses based on user?
The documentation showcases how to use multiple mailer configurations (multiple From addresses) [email protected] quarkus.mailer.host=smtp.gmail.com [email protected] quarkus.mailer.aws.host=${ses.smtp} quarkus.mailer.aws.port=587 [email protected] quarkus.mailer.sendgrid.host=${sendgrid.smtp-host} quarkus.mailer.sendgrid.port=465 So you would write: [email protected] [email protected] [email protected] Then you would inject them as shown below and use them based on whom you want to send with: @Inject @MailerName("aws") Mailer mailer; @Inject @MailerName("sendgrid") Mailer mailer; aws and sendgrid are the names used in quarkus.mailer.xxx.from https://quarkus.io/guides/mailer-reference#multiple-mailer-configurations The Quarkus Mailer is implemented on top of the Vert.x Mail Client, providing an asynchronous and non-blocking way to send emails. If you need fine control over how the mail is sent, for instance if you need to retrieve the message ids, you can inject the underlying client and use it directly: @Inject MailClient client; Then use it: MailMessage message = new MailMessage(); message.setFrom("[email protected] (Example User)"); message.setTo("[email protected]"); message.setCc("Another User <[email protected]>"); message.setText("this is the plain message text"); message.setHtml("this is html text <a href=\"http://vertx.io\">vertx.io</a>"); To send using MailClient: mailClient.sendMail(message) .onSuccess(System.out::println) .onFailure(Throwable::printStackTrace); https://quarkus.io/guides/mailer-reference#using-the-underlying-vert-x-mail-client https://vertx.io/docs/vertx-mail-client/java/
76380847
76380988
This is a question from the Rustonomicon's lifetimes chapter. The first example compiles, as x is a reference and the compiler can infer its lifetime to be as short as the last use here: println!(), so x is dropped after this line. let mut data = vec![1, 2, 3]; let x = &data[0]; println!("{}", x); // This is OK, x is no longer needed data.push(4); But the case is different when x is a struct implementing the Drop trait. #[derive(Debug)] struct X<'a>(&'a i32); impl Drop for X<'_> { fn drop(&mut self) {} } let mut data = vec![1, 2, 3]; let x = X(&data[0]); println!("{:?}", x); data.push(4); // Here, the destructor is run and therefore this'll fail to compile. The Rustonomicon says that in this case drop() is only executed at the very end of a scope, so x stays valid until the last line. But why can't the compiler minimize the lifetime of x to the last use? And does applying drop() just after the last use have some nontrivial side effects when x implements the Drop trait?
Why Drop trait is only executed at the end of the scope, instead of after the last use?
The primary reason is that it was once defined to be like that, and now changing it isn't possible any more because it wouldn't be backwards-compatible and might break stuff. Your code is easily fixable by introducing a nested scope, though, which is (to my understanding) best practice in those situations: #[derive(Debug)] struct X<'a>(&'a i32); impl Drop for X<'_> { fn drop(&mut self) {} } fn main() { let mut data = vec![1, 2, 3]; { let x = X(&data[0]); println!("{:?}", x); } data.push(4); } X(1) Alternatively, you could drop it manually: #[derive(Debug)] struct X<'a>(&'a i32); impl Drop for X<'_> { fn drop(&mut self) {} } fn main() { let mut data = vec![1, 2, 3]; let x = X(&data[0]); println!("{:?}", x); drop(x); data.push(4); } X(1)
76384211
76384348
I've a microservice architecture, and need some common logic. When it's necessary to create a custom spring boot starter rather than create a new microservice?
When it's necessary to create a custom spring boot starter rather than create a new microservice?
In my experience, a new microservice is generally created from the ground up to prevent a monolith from forming. Microservices should generally have one job and do it well. You don't want to muddy up the implementation and purpose of your microservice by adding unrelated operations. There are many design patterns for the "types" of service you could be creating, but I won't go into too much detail there. Overall, based on what business purpose you are solving, you can select your design and begin development. Different designs should be separated and not combined into monolithic styles. Here is a good article showcasing design options: https://www.openlegacy.com/blog/microservices-architecture-patterns/ If you find yourself having to re-create multiple microservices serving different use cases, you can always utilize a tool such as Yeoman to speed up creating these new projects. You can build a generator that gives you a working template so you don't have to spend time redeveloping from the ground up each time you need a different service. Here is a guide I wrote recently on creating your own Yeoman generator: https://medium.com/@dylanlamott/building-a-yeoman-generator-line-by-line-6966debb39a3
76378628
76378769
AttributeError: 'int' object has no attribute 'astype' in automatic WhatsApp message sender script The following is an automated WhatsApp message sender script I partially developed. I tried the following script and it worked fine with an Excel file with 5 numbers in it. However, when I tried upscaling it to 1700+ numbers, I get the following traceback: Traceback (most recent call last): File "c:\Users\MSI\Desktop\AutoSenderPY\main.py", line 9, in <module> cellphone = data.loc[i,'Cellphone'].astype(str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'int' object has no attribute 'astype' The script is the following: import pandas as pd import webbrowser as web import pyautogui as pg import time data = pd.read_excel("book1.xlsx", sheet_name='sheet1') for i in range(len(data)): cellphone = data.loc[i,'Cellphone'].astype(str) message = "Test Message" web.open("https://web.whatsapp.com/send?phone=" + cellphone + "&text=" + message) time.sleep(5.5) pg.click(1230,964) time.sleep(1) pg.press('enter') time.sleep(2) pg.hotkey('ctrl', 'w') time.sleep(1) Why is that happening, and how can I get it working for those 1700+ numbers?
How to fix 'int' object has no attribute 'astype' error when sending WhatsApp messages to large number of contacts using Python and pandas?
Try using cellphone = str(data.loc[i,'Cellphone']) instead. Here data.loc[i,'Cellphone'] returns a single scalar element; in your case a plain Python int, which has no astype method (astype exists on pandas and NumPy array types, not on built-in ints), so wrapping the value in str() is enough. It likely worked on the small file because the column came back with a NumPy dtype, whose scalar elements do have astype, while the larger file produced an object column of plain Python ints.
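Alternatively, you could convert the whole column once before the loop, since astype does exist on a pandas Series even though it is missing on a single scalar element; a minimal sketch of the relevant part (file, sheet and column names as in the question):

import pandas as pd

data = pd.read_excel("book1.xlsx", sheet_name="sheet1")

# Convert the entire column to strings up front; every later
# data.loc[i, 'Cellphone'] access then already yields a str.
data["Cellphone"] = data["Cellphone"].astype(str)

for i in range(len(data)):
    cellphone = data.loc[i, "Cellphone"]  # already a string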
76378370
76378800
I have two tables, one has course name and course ID. The second table has the ID of the students and the course ID they have taken. I need to find all the class ID’s of the classes a student hasn’t taken. For example, in table 2 student 03 has taken classes 01 and 02 but not 03 and 04 from table one. The course ID’s 03 and 04 from table one are what I need to return (all the classes student 03 hasn't taken). I've tried numerous queries and the last one I tried is: SELECT table1.* FROM table1 LEFT JOIN table2 ON table1.course_ID = table2.course_ID WHERE table2.course_ID IS NULL AND table2.user_ID != 3 Appreciate your help! table 1 course_ID courseName 01 math 02 English 03 art 04 music table 2 cert_Id course_ID user_ID 01 01 03 02 02 03
SQL How to return record ID's not included in table 2 from table 1 based off of user ID in table 2
As per your current requirement, the below query will work: SELECT * FROM table1 t1 WHERE course_ID NOT IN (SELECT course_ID FROM table2 WHERE user_ID = 3) If you have more records in table2 and you need to populate more than one student's details, then you will have to use different logic. If you want to modify your original query instead, use it as below: SELECT table1.* FROM table1 LEFT JOIN table2 ON table1.course_ID = table2.course_ID AND table2.user_ID = 3 WHERE table2.course_ID IS NULL
76380967
76381059
In T-Sql I am parsing JSON and using PIVOT. Select * from (select [key],convert(varchar,[value])[value] from openjson ('{"Name":"tew","TabTypeId":9,"Type":3}'))A pivot(max(value) for [key] in ([Name],tabTypeId,[Type]))b It is not treating tabTypeId as equal to TabTypeId. I am getting NULL for tabTypeId. If I use TabTypeId I get the value 9. Why is it happening?
Why is SQL Server Pivot being case sensitive on TabTypeId instead of treating it as the actual column name?
It's not PIVOT that is case sensitive, it's the data returned from OPENJSON that is. If you check the data returned from it, you'll see that the column key is a binary collation: SELECT name, system_type_name, collation_name FROM sys.dm_exec_describe_first_result_set(N'SELECT [key], CONVERT(varchar, [value]) AS [value] FROM OPENJSON(''{"Name":"tew","TabTypeId":9,"Type":3}'');',NULL,NULL) name system_type_name collation_name key nvarchar(4000) Latin1_General_BIN2 value varchar(30) SQL_Latin1_General_CP1_CI_AS For binary collations the actual bytes of the characters must match. As such N'tabTypeId' and N'TabTypeId' are not equal as N'T' and N't' have the binary values 0x5400 and 0x7400. Though I am unsure why you are using PIVOT at all; just define your columns in your OPENJSON call: SELECT name, --Columns are intentionally demonstrating non-case sensitivity tabTypeId, type FROM OPENJSON('{"Name":"tew","TabTypeId":9,"Type":3}') WITH (Name varchar(3), TabTypeId int, Type int); Note that in the WITH clause of OPENJSON the column names are still case sensitive. tabTypeId int would also yield NULL. If you "had" to have a column called tabTypeId defined prior to the SELECT you would use tabTypeId int '$.TabTypeId' instead.
76384091
76384360
I have a query here that uses four subqueries inside a single CTE, and each subquery is scanning every row of another CTE for each row in itself. I would think that this is very inefficient. Are there any SQL optimizations that I can implement now that the proof of concept is finished? I don't have write access to the database, so optimizations would be required within the select clause. WITH datetable AS ( SELECT generate_series( DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)), DATE_TRUNC('week', now()), '1 week'::INTERVAL )::DATE AS week_start ), all_users AS ( SELECT id, registered_at, NULL AS deleted_at FROM org_accounts.users WHERE status = 'active' AND org_accounts.__user_is_qa(id) <> 'Y' AND email NOT LIKE '%@org%' UNION ALL SELECT id, created_at AS registered_at, deleted_at FROM org_accounts.deleted_users WHERE deleter_id = id AND email NOT LIKE '%@org%' ), weekly_activity AS ( SELECT DATE_TRUNC('week', date)::DATE AS week_start, COUNT(DISTINCT user_id) AS weekly_active_users FROM ( SELECT user_id, date FROM org_storage_extra.stats_user_daily_counters WHERE type in ('created_file', 'created_folder', 'created_secure_fetch') UNION ALL SELECT user_id, date FROM ipfs_pinning_facility.stats_user_daily_counters WHERE type <> 'shares_viewed_by_others' ) activity_ids_dates WHERE EXISTS(SELECT 1 from all_users WHERE id = user_id) GROUP BY week_start ), preprocessed AS ( SELECT week_start, ( SELECT COUNT(DISTINCT id) FROM all_users WHERE registered_at < week_start AND (deleted_at IS NULL OR deleted_at > week_start) ) AS actual_users, ( SELECT COUNT(DISTINCT id) FROM all_users WHERE deleted_at < week_start + '1 week'::INTERVAL ) AS cumulative_churned_users, ( SELECT COUNT(DISTINCT id) FROM all_users WHERE registered_at >= week_start AND registered_at < week_start + '1 week'::INTERVAL ) AS weekly_new_users, ( SELECT COUNT(DISTINCT id) FROM all_users WHERE deleted_at >= week_start AND deleted_at < week_start + '1 week'::INTERVAL ) AS weekly_churned_users, COALESCE(weekly_active_users, 0) AS weekly_active_users FROM datetable dt LEFT JOIN weekly_activity USING (week_start) ORDER BY week_start DESC ) SELECT week_start AS for_week_of, actual_users + cumulative_churned_users AS cumulative_users, cumulative_churned_users, cumulative_churned_users::FLOAT / NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate, actual_users, weekly_new_users, weekly_churned_users, weekly_active_users, weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate FROM preprocessed; Results of query analysis: QUERY PLAN Subquery Scan on preprocessed (cost=40875.45..7501783.95 rows=1000 width=68) (actual time=1553.471..13613.116 rows=231 loops=1) Output: preprocessed.week_start, (preprocessed.actual_users + preprocessed.cumulative_churned_users), preprocessed.cumulative_churned_users, ((preprocessed.cumulative_churned_users)::double precision / NULLIF(((preprocessed.actual_users + preprocessed.cumulative_churned_users))::double precision, '0'::double precision)), preprocessed.actual_users, preprocessed.weekly_new_users, preprocessed.weekly_churned_users, preprocessed.weekly_active_users, ((preprocessed.weekly_churned_users)::double precision / NULLIF((preprocessed.actual_users)::double precision, '0'::double precision)) Buffers: shared hit=287734 read=1964, temp read=274840 written=873 CTE all_users -> Append (cost=0.00..30953.99 rows=70293 width=32) (actual time=0.099..1313.372 rows=71228 loops=1) Buffers: shared hit=285995 read=1964 -> Seq 
Scan on org_accounts.users (cost=0.00..27912.65 rows=70009 width=32) (actual time=0.099..1289.469 rows=70007 loops=1) Output: users.id, users.registered_at, NULL::timestamp with time zone Filter: ((users.email !~~ '%@mailinator%'::text) AND (users.email !~~ '%@org%'::text) AND (users.email !~~ '%testaccnt%'::text) AND (users.status = 'active'::text) AND ((org_accounts.__user_is_qa(users.id))::text <> 'Y'::text)) Rows Removed by Filter: 9933 Buffers: shared hit=285269 read=1964 -> Seq Scan on org_accounts.deleted_users (cost=0.00..1986.94 rows=284 width=32) (actual time=0.014..14.267 rows=1221 loops=1) Output: deleted_users.id, deleted_users.created_at, deleted_users.deleted_at Filter: ((deleted_users.email !~~ '%@mailinator%'::text) AND (deleted_users.email !~~ '%@org%'::text) AND (deleted_users.email !~~ '%testaccnt%'::text) AND (deleted_users.deleter_id = deleted_users.id)) Rows Removed by Filter: 61826 Buffers: shared hit=726 -> Merge Left Join (cost=9921.47..7470794.97 rows=1000 width=44) (actual time=1553.467..13612.496 rows=231 loops=1) Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date), (SubPlan 2), (SubPlan 3), (SubPlan 4), (SubPlan 5), COALESCE(weekly_activity.weekly_active_users, '0'::bigint) Inner Unique: true Merge Cond: ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) = weekly_activity.week_start) Buffers: shared hit=287734 read=1964, temp read=274840 written=873 -> Sort (cost=1601.45..1603.95 rows=1000 width=4) (actual time=10.108..10.250 rows=231 loops=1) Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) Sort Key: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) DESC Sort Method: quicksort Memory: 35kB Buffers: shared hit=726 -> Result (cost=1514.10..1541.62 rows=1000 width=4) (actual time=9.986..10.069 rows=231 loops=1) Output: ((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date Buffers: shared hit=726 InitPlan 6 (returns $5) -> Aggregate (cost=1514.09..1514.10 rows=1 width=8) (actual time=9.974..9.975 rows=1 loops=1) Output: min(deleted_users_1.created_at) Buffers: shared hit=726 -> Seq Scan on org_accounts.deleted_users deleted_users_1 (cost=0.00..1356.47 rows=63047 width=8) (actual time=0.006..4.332 rows=63047 loops=1) Output: deleted_users_1.id, deleted_users_1.email, deleted_users_1.created_at, deleted_users_1.deleter_id, deleted_users_1.deleted_at, deleted_users_1.registration_app Buffers: shared hit=726 -> ProjectSet (cost=0.00..5.03 rows=1000 width=8) (actual time=9.984..10.030 rows=231 loops=1) Output: generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval) Buffers: shared hit=726 -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1) -> Sort (cost=8320.02..8320.52 rows=200 width=12) (actual time=1475.315..1475.418 rows=159 loops=1) Output: weekly_activity.weekly_active_users, weekly_activity.week_start Sort Key: weekly_activity.week_start DESC Sort Method: quicksort Memory: 32kB Buffers: shared hit=287008 read=1964, temp read=412 written=872 -> Subquery Scan on weekly_activity (cost=8050.90..8312.37 rows=200 width=12) (actual time=1466.686..1475.279 rows=159 loops=1) Output: weekly_activity.weekly_active_users, weekly_activity.week_start Buffers: shared hit=287008 read=1964, temp 
read=412 written=872 -> GroupAggregate (cost=8050.90..8310.37 rows=200 width=12) (actual time=1466.685..1475.254 rows=159 loops=1) Output: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date), count(DISTINCT "*SELECT* 1".user_id) Group Key: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date) Buffers: shared hit=287008 read=1964, temp read=412 written=872 -> Sort (cost=8050.90..8136.22 rows=34130 width=20) (actual time=1466.668..1468.872 rows=23005 loops=1) Output: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date), "*SELECT* 1".user_id Sort Key: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date) Sort Method: quicksort Memory: 2566kB Buffers: shared hit=287008 read=1964, temp read=412 written=872 -> Hash Join (cost=1586.09..5481.12 rows=34130 width=20) (actual time=1411.350..1462.022 rows=23005 loops=1) Output: (date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date, "*SELECT* 1".user_id Inner Unique: true Hash Cond: ("*SELECT* 1".user_id = all_users.id) Buffers: shared hit=287008 read=1964, temp read=412 written=872 -> Append (cost=0.00..3080.17 rows=68261 width=20) (actual time=0.010..25.441 rows=68179 loops=1) Buffers: shared hit=1013 -> Subquery Scan on "*SELECT* 1" (cost=0.00..1018.43 rows=21568 width=20) (actual time=0.008..7.895 rows=21532 loops=1) Output: "*SELECT* 1".date, "*SELECT* 1".user_id Buffers: shared hit=372 -> Seq Scan on org_storage_extra.stats_user_daily_counters (cost=0.00..802.75 rows=21568 width=20) (actual time=0.008..5.910 rows=21532 loops=1) Output: stats_user_daily_counters.user_id, stats_user_daily_counters.date Filter: (stats_user_daily_counters.type = ANY ('{created_file,created_folder,created_secure_fetch}'::text[])) Rows Removed by Filter: 9795 Buffers: shared hit=372 -> Subquery Scan on "*SELECT* 2" (cost=0.00..1720.44 rows=46693 width=20) (actual time=0.009..12.460 rows=46647 loops=1) Output: "*SELECT* 2".date, "*SELECT* 2".user_id Buffers: shared hit=641 -> Seq Scan on ipfs_pinning_facility.stats_user_daily_counters stats_user_daily_counters_1 (cost=0.00..1253.51 rows=46693 width=20) (actual time=0.009..8.209 rows=46647 loops=1) Output: stats_user_daily_counters_1.user_id, stats_user_daily_counters_1.date Filter: (stats_user_daily_counters_1.type <> 'shares_viewed_by_others'::text) Rows Removed by Filter: 2354 Buffers: shared hit=641 -> Hash (cost=1583.59..1583.59 rows=200 width=16) (actual time=1411.250..1411.251 rows=71228 loops=1) Output: all_users.id Buckets: 131072 (originally 1024) Batches: 2 (originally 1) Memory Usage: 3073kB Buffers: shared hit=285995 read=1964, temp read=100 written=717 -> HashAggregate (cost=1581.59..1583.59 rows=200 width=16) (actual time=1383.986..1398.270 rows=71228 loops=1) Output: all_users.id Group Key: all_users.id Batches: 5 Memory Usage: 4161kB Disk Usage: 1544kB Buffers: shared hit=285995 read=1964, temp read=100 written=560 -> CTE Scan on all_users (cost=0.00..1405.86 rows=70293 width=16) (actual time=0.102..1351.241 rows=71228 loops=1) Output: all_users.id Buffers: shared hit=285995 read=1964, temp written=296 SubPlan 2 -> Aggregate (cost=1777.05..1777.06 rows=1 width=8) (actual time=20.197..20.197 rows=1 loops=231) Output: count(DISTINCT all_users_1.id) Buffers: temp read=68607 written=1 -> CTE Scan on all_users all_users_1 (cost=0.00..1757.33 rows=7888 width=16) (actual time=0.883..10.874 rows=27239 loops=231) Output: all_users_1.id, all_users_1.registered_at, 
all_users_1.deleted_at Filter: ((all_users_1.registered_at < (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND ((all_users_1.deleted_at IS NULL) OR (all_users_1.deleted_at > (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)))) Rows Removed by Filter: 43989 Buffers: temp read=68607 written=1 SubPlan 3 -> Aggregate (cost=1815.90..1815.91 rows=1 width=8) (actual time=11.215..11.215 rows=1 loops=231) Output: count(DISTINCT all_users_2.id) Buffers: temp read=68607 -> CTE Scan on all_users all_users_2 (cost=0.00..1757.33 rows=23431 width=16) (actual time=11.009..11.150 rows=231 loops=231) Output: all_users_2.id, all_users_2.registered_at, all_users_2.deleted_at Filter: (all_users_2.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval)) Rows Removed by Filter: 70997 Buffers: temp read=68607 SubPlan 4 -> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=14.515..14.515 rows=1 loops=231) Output: count(DISTINCT all_users_3.id) Buffers: temp read=68607 -> CTE Scan on all_users all_users_3 (cost=0.00..1933.06 rows=351 width=16) (actual time=2.264..14.424 rows=308 loops=231) Output: all_users_3.id, all_users_3.registered_at, all_users_3.deleted_at Filter: ((all_users_3.registered_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_3.registered_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval))) Rows Removed by Filter: 70920 Buffers: temp read=68607 SubPlan 5 -> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=6.556..6.556 rows=1 loops=231) Output: count(DISTINCT all_users_4.id) Buffers: temp read=68607 -> CTE Scan on all_users all_users_4 (cost=0.00..1933.06 rows=351 width=16) (actual time=6.441..6.547 rows=5 loops=231) Output: all_users_4.id, all_users_4.registered_at, all_users_4.deleted_at Filter: ((all_users_4.deleted_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_4.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval))) Rows Removed by Filter: 71223 Buffers: temp read=68607 Planning Time: 0.612 ms Execution Time: 13615.054 ms
PSQL / SQL: Is it possible to further optimize this query without requiring write access to the database?
An obvious optimization is to eliminate redundant table scans. There isn't any need in preprocessed to query from all_users more than once. The following query uses COUNT with FILTER to gather the same statistics: WITH datetable AS (SELECT GENERATE_SERIES( DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)), DATE_TRUNC('week', NOW()), '1 week'::INTERVAL )::DATE AS week_start), all_users AS (SELECT id, registered_at, NULL AS deleted_at FROM org_accounts.users WHERE status = 'active' AND org_accounts.__user_is_qa(id) <> 'Y' AND email NOT LIKE '%@org%' UNION ALL SELECT id, created_at AS registered_at, deleted_at FROM org_accounts.deleted_users WHERE deleter_id = id AND email NOT LIKE '%@org%'), weekly_activity AS (SELECT DATE_TRUNC('week', date)::DATE AS week_start, COUNT(DISTINCT user_id) AS weekly_active_users FROM (SELECT user_id, date FROM org_storage_extra.stats_user_daily_counters WHERE type IN ('created_file', 'created_folder', 'created_secure_fetch') UNION ALL SELECT user_id, date FROM ipfs_pinning_facility.stats_user_daily_counters WHERE type <> 'shares_viewed_by_others') activity_ids_dates WHERE EXISTS(SELECT 1 FROM all_users WHERE id = user_id) GROUP BY week_start), preprocessed AS (SELECT week_start, us.actual_users, us.cumulative_churned_users, us.weekly_new_users, us.weekly_churned_users, COALESCE(weekly_active_users, 0) AS weekly_active_users FROM datetable dt CROSS JOIN LATERAL (SELECT COUNT(DISTINCT u.id) FILTER (WHERE u.registered_at < dt.week_start AND (u.deleted_at IS NULL OR u.deleted_at > dt.week_start)) AS actual_users, COUNT(DISTINCT u.id) FILTER (WHERE u.deleted_at < dt.week_start + '1 week'::INTERVAL) AS cumulative_churned_users, COUNT(DISTINCT u.id) FILTER (WHERE u.registered_at >= dt.week_start AND u.registered_at < dt.week_start + '1 week'::INTERVAL) AS weekly_new_users, COUNT(DISTINCT u.id) FILTER (WHERE u.deleted_at >= dt.week_start AND u.deleted_at < dt.week_start + '1 week'::INTERVAL) AS weekly_churned_users FROM all_users u WHERE u.registered_at < dt.week_start + '1 week'::INTERVAL OR (u.deleted_at >= dt.week_start AND u.deleted_at < dt.week_start + '1 week'::INTERVAL)) us LEFT JOIN weekly_activity USING (week_start) ORDER BY week_start DESC) SELECT week_start AS for_week_of, actual_users + cumulative_churned_users AS cumulative_users, cumulative_churned_users, cumulative_churned_users::FLOAT / NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate, actual_users, weekly_new_users, weekly_churned_users, weekly_active_users, weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate FROM preprocessed; There are probably other optimizations possible, but this one was immediately evident.
76378322
76378801
I cannot work out how to convert an int to a generic type containing complex128. Here is an example which doesn't compile unless the complex128 is commented out: package main import "fmt" type val interface { int64 | float64 | complex128 } func f[V val](a, b V) (c V) { q := calc() return a * b * V(q) } func calc() int { // lengthy calculation that returns an int return 1 } func main() { fmt.Printf("%v\n", f(int64(1), int64(2))) } This is simplified from a much larger calculation. I've tried using a switch but every syntax I have attempted seems to meet resistance of one kind or another. How can I multiply a and b with an integer? I have tried using a switch on the type of the return variable such as any(c).(type) but for example if I have case complex128: then it refuses to allow the complex builtin since it doesn't return a V. Without the complex128 the above will compile.
How can I convert an int to a generic type containing complex128 in Go?
This one works but it needs to list every type in the switch statement: func f[V val](a, b V) (c V) { q := calc() var temp any switch any(c).(type) { case complex128: temp = complex(float64(q), 0) case int64: temp = int64(q) default: temp = float64(q) } return a * b * (temp.(V)) }
76378721
76378829
In WordPress with the WooCommerce plugin, is there any way to hide the "Display Cart" button in the WordPress mini cart widget? I can hide the "Checkout" button individually, but it seems there's no special CSS class for the "Display Cart" button.
Hide 'Display Cart' button in WooCommerce mini cart widget
you can try this add_action( 'woocommerce_widget_shopping_cart_buttons', 'bbloomer_remove_view_cart_minicart', 1 ); function bbloomer_remove_view_cart_minicart() { remove_action( 'woocommerce_widget_shopping_cart_buttons', 'woocommerce_widget_shopping_cart_button_view_cart', 10 ); } OR .widget .woocommerce-mini-cart__buttons a:not(.checkout) { display: none; }
76378661
76378944
I'm looking for the fastest way to parse a hex string representing a ulong into a uint keeping as many leading digits as a uint can handle and discarding the rest. For example, string hex = "0xab54a9a1df8a0edb"; // 12345678991234567899 Should output: uint result = 1234567899; I can do this by simply parsing the hex into a ulong, getting the digits using ToString and then just taking as many of them as would fit into uint without overflowing but I need something much faster. Thanks. C# code preferred but any would do.
The fastest way to convert a UInt64 hex string to a UInt32 value preserving as many leading digits as possible, i.e. truncation
For decimal truncation, all the high bits of the hex digit affect the low 9 or 10 decimal digits, so you need to convert the whole thing. Is there an algorithm to convert massive hex string to bytes stream QUICKLY? asm/C/C++ has C++ with SSE intrinsics. I commented there with some possible improvements to that, and to https://github.com/zbjornson/fast-hex . This could be especially good if you're using SIMD to find numeric literals in larger buffers, so you might have the hex string in a SIMD register already. (Not sure if SIMDJSON does that.) Hex-string to 64-bit integer is something SIMD certainly can speed up, e.g. do something to map each digit to a 0-15 integer, combine pairs of bytes to pack nibbles (e.g. with x86 pmaddubsw), then shuffle those 8-bit chunks to the bottom of a register. (e.g. packuswb or pshufb). x86 at least has efficient SIMD to GP-integer movq rax, xmm0, although the ARM equivalent is slow on some ARM CPUs. (Getting a speedup from SIMD for ASCII hex -> uint is much easier if your strings are fixed-length, and probably if you don't need to check for invalid characters that aren't hex digits.) Decimal truncation of u64 (C# ulong) to fit in u32 (C# uint) Modulo by a power of 10 truncates to some number of decimal digits. (uint)(x % 10000000000) works for some numbers, but 10000000000 (1e10 = one followed by 10 zeros) is larger than 2^32-1. Consider an input like 0x2540be3ff (9999999999). We'd get (uint)9999999999 producing 1410065407 = 0x540be3ff (keeping the low 32 bits of that 34-bit number.) So perhaps try modulo 1e10, but if it's too big for u32 then modulo 1e9. ulong tendigit = x % 10000000000; // 1e10 uint truncated = tendigit <= (ulong)0xffffffff ? tendigit : (x % 1000000000); // % 1e9 keeps 9 decimal digits If this isn't correct C# syntax or the literals need some decoration to make them ulong (like C 10000000000uLL for good measure), please let me know. It's probably at least as efficient to just modulo the original number two different ways than to try to get the leading decimal digit of x % 1e10 and subtract it or whatever. The asm is going to need two 64-bit multiplicative inverse constants, and starting from the original number again keeps critical-path latency shorter for out-of-order exec if branch prediction predicts that it needs to calculate the nine-digit truncation. Binary truncation @Matthew Whited deleted his answer (due to a bug in the decimal truncation part), but his binary truncation part based on substrings of the original hex input could perhaps be more efficient in some cases than doing the full conversion and then casting to a narrower type or masking with AND. If you want the last 8 bytes of the hex string uint.Parse(hex[^8..],NumberStyles.HexNumber) If you want the first 8 bytes uint.Parse(hex[2..10], NumberStyles.HexNumber);
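For completeness, a minimal self-contained C# sketch combining the parsing step with the modulo-based decimal truncation described above, using the question's example input (Convert.ToUInt64 accepts the 0x prefix when fromBase is 16):

using System;

class Program
{
    // Keep the low 10 decimal digits if they fit in a uint,
    // otherwise fall back to the low 9 digits.
    static uint TruncateDecimal(ulong x)
    {
        ulong tenDigits = x % 10_000_000_000UL;
        return tenDigits <= uint.MaxValue
            ? (uint)tenDigits
            : (uint)(x % 1_000_000_000UL);
    }

    static void Main()
    {
        ulong value = Convert.ToUInt64("0xab54a9a1df8a0edb", 16);
        Console.WriteLine(TruncateDecimal(value)); // prints 1234567899
    }
}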
76384304
76384361
I am facing a problem. I have a project which is in Firebase. I have used Firebase Authentication, Firebase Realtime Database, Firebase Functions and some more there. Now I have changed my mind: I want to make my own server where I will set up and manage everything. So I want to back up my project to move all data to another framework, like a Spring Boot project. In this situation how can I get the whole project? User Auth data, Firebase Realtime Database, Firestore etc.
How to backup a full project of firebase
You'll have to write code or use the CLI to query all of the data you want, and write it to a place you want. Firebase does not provide a tool to do all this automatically for an entire project. You will need to deal with each product's data separately. You can use the Firebase Admin SDK or the Firebase CLI to access data from the products you listed. See also: Is it possible to backup Firebase DB? https://firebase.google.com/docs/firestore/manage-data/export-import https://firebase.google.com/docs/cli/auth
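As an illustration, a minimal Python sketch using the Firebase Admin SDK to dump Auth users and the Realtime Database into a local JSON file; the service account path and database URL are placeholders for your own project's values, and Firestore would need a similar export of its collections:

import json

import firebase_admin
from firebase_admin import auth, credentials, db

# Placeholder credentials and URL: replace with your own project's values.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-id-default-rtdb.firebaseio.com",
})

# Export all Auth users; iterate_all() handles pagination for you.
users = [
    {"uid": u.uid, "email": u.email, "display_name": u.display_name}
    for u in auth.list_users().iterate_all()
]

# Export the entire Realtime Database tree in one read.
rtdb_snapshot = db.reference("/").get()

with open("firebase-backup.json", "w") as f:
    json.dump({"users": users, "realtime_database": rtdb_snapshot}, f, indent=2)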
76378577
76378945
I am trying to build a simple language translating program. I imported the 'language_converter' gem to aid with this goal. I wrote the following code: require 'language_converter' class Translator def initialize @to = 'ja'; @from = 'en'; end def translate text lc(text, @to,@from) end end #puts lc('welcome to Japan!', 'ja','en'); t = Translator.new p t.translate('welcome to Japan!'); This code results in the error: undefined method 'lc' for #<Translator:0x0000000101167a90 @to="ja", @from="en"> (NoMethodError) However, when I uncomment the code on line 15, Ruby can access the lc method and returns some Japanese. Does anyone know why the method is 'defined' outside of the class but not inside? Edit: the language-converter gem is not my own. Also, I cannot find the source code on its homepage. I have also tried adding two colons before the lc method like so: ::lc(text, @to,@from). This results in the error: syntax error, unexpected local variable or method, expecting constant
Why does ruby recognise a method outside of a class, but not inside?
The gem is more than 10 years old and only has one method. And that method is implemented as a class method. You are probably better off just rewriting that method in your application with modern Ruby syntax and proper error handling. For reference, this is how lib/language_converter.rb in the gem looks: require 'net/http' require 'rubygems' require "uri" require 'json' class UnSupportedLanguage < RuntimeError def initialize(message='') @msg = "not supported." end end def self.lc( text, to, from='en' ) begin uri = URI.parse("http://mymemory.translated.net/api/get") response = Net::HTTP.post_form(uri, {"q" => text,"langpair"=>"#{from.to_s.downcase}|#{to.to_s.downcase}", "per_page" => "50"}) json_response_body = JSON.parse( response.body ) if json_response_body['responseStatus'] == 200 json_response_body['responseData']['translatedText'] else puts json_response_body['responseDetails'] raise StandardError, response['responseDetails'] end rescue UnSupportedLanguage raise UnSupportedLanguage.new rescue => err_msg puts "#{err_msg}" end end
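For illustration, a minimal sketch of such a rewrite as an instance method on your Translator class; it targets the same MyMemory endpoint the gem uses, and the exact response-handling details are an assumption you may need to adapt:

require "net/http"
require "json"
require "uri"

class Translator
  def initialize(to: "ja", from: "en")
    @to = to
    @from = from
  end

  def translate(text)
    uri = URI("https://api.mymemory.translated.net/get")
    uri.query = URI.encode_www_form(q: text, langpair: "#{@from}|#{@to}")

    body = JSON.parse(Net::HTTP.get(uri))
    # MyMemory reports errors via responseStatus / responseDetails.
    raise body["responseDetails"].to_s unless body["responseStatus"] == 200

    body.dig("responseData", "translatedText")
  end
end

t = Translator.new
puts t.translate("welcome to Japan!")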
76384270
76384365
In this example, I want the purple rectangle to change its opacity to 100% regardless of the value of the parent. I tried using all: unset/initial and !important but it doesn't seem to work. .rect { width: 500px; height: 600px; margin-top: 200px; margin-left: 300px; background-color: black; /* this V */ opacity: 37%; z-index: -1; } .rect1 { all: unset; position: absolute; z-index: 10; width: 259px; height: 300px; margin-top: 500px; margin-left: 50px; background-color: purple; /* to this V */ opacity: 100% !important; } <div class="rect"> <div class="rect1"></div> </div>
How to override parent's styles in css?
So like Haworth pointed out, using opacity on the element itself brings all children under the influence of the pixelshading used to make the opacity effect. If you want to get the same effect while retaining your html structure I'd recommend a different approach for the same result using RGBA or hex with an alpha channel on the background-color property directly. See example below. body { height: 100%; width: 100%; background: url(https://picsum.photos/800) no-repeat; background-size: cover; } .rect { width: 500px; height: 600px; margin-top: 200px; margin-left: 300px; background-color: rgba(0,0,0,.37); /* this V opacity: 37%;*/ z-index: -1; } .rect1 { position: absolute; z-index: 10; width: 259px; height: 300px; margin-top: 500px; margin-left: 50px; background-color: purple; /* to this V */ opacity: 100% !important; } <div class="rect"> <div class="rect1"></div> </div>
76378347
76378974
I'm running a bat file in Windows. I'm trying to generate a log file of all the output that appears in the command prompt, to have as a document. Note, not a log file of the contents of the bat file, but of the output it prints to the command prompt. How would I do this? Thanks
How to generate a log file of the windows prompt when I run a bat file
Redirecting output to a file is done by using > (or >> to append to the file). For a batch-file, we typically call it like this: (call script.cmd)>"logfile.log" 2>&1 or, to append: (call script.cmd)>>"logfile.log" 2>&1 Note, 2>&1 redirects the stderr stream 2 to the stdout stream 1, and it must come after the file redirection; it is important here, seeing as you said you want to log all of the output to the logfile. That should also give you the clue that you can in fact redirect success (stdout) results to one file and failures (stderr) to another, i.e. (call script.cmd) 1>"Output.log" 2>"Errors.log" Note, some commands and executables send everything to the stdout stream and nothing to stderr, for example ping.exe.
76384255
76384399
I need to find out the value of "name" inside the obj object. How can I find it without function invocation? I want to use just obj.isActive, not obj.isActive() let obj = { name: "X Æ A-12 Musk", isActive: function () { return this.name.length > 4; }, }; // and after a while I need to check if it is active: console.log(obj); // { // name: 'X Æ A-12 Musk', // isActive: [Function: isActive] <--------- NOT COOL ! // } If I use an IIFE: let obj = { name: "X Æ A-12 Musk", isActive: (function () { return this.name.length > 4; })(), }; I get: return this.name.length > 4; ^ TypeError: Cannot read properties of undefined (reading 'length')
calculate an object property based on the value of another property of the same object
If you do not want to have to call isActive as a function, you can use a getter. const obj = { name: "X Æ A-12 Musk", get isActive () { return this.name.length > 4; }, }; console.log(obj.isActive);
76384220
76384400
Source Data: json_data = [{"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]}, {"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]}] Hardcoded_Val1 = 10 Hardcoded_Val2 = 20 Hardcoded_Val3 = str(datetime.datetime.now()) I need to create a flat .txt file with the below data. ID,DEPT,"studentid|name|subjects",execution_dt 10,20,"1|ABC|Python,Data Structures",2023-06-01 10,20,"2|PQR|Java,Operating System",2023-06-01 I am very new to Python. I have already tried to figure out how to achieve it but couldn't. Your help will be much appreciated. import datetime import pandas as pd import json json_data = [{"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]}, {"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]}] Hardcoded_Val1 = 10 Hardcoded_Val2 = 20 Hardcoded_Val3 = str(datetime.datetime.now()) profile = str(Hardcoded_Val1) + ',' + str(Hardcoded_Val2) + ',"' + str(json_data) + '",' + Hardcoded_Val3 print(profile) #data = json.dumps(profile, indent=True) #print(data) data_list = [] for data_info in profile: data_list.append(data_info.replace(", '", '|')) data_df = pd.DataFrame(data=data_list) data_df.to_csv(r'E:\DataLake\api_fetched_sample_output.txt', sep='|', index=False, encoding='utf-8')
Code to format JSON data and append hardcoded data to create a flat .txt file
I would bypass using pandas for this and just build the string manually primarily using a list comprehension and join(). import datetime import csv Hardcoded_Val1 = 10 Hardcoded_Val2 = 20 Hardcoded_Val3 = str(datetime.date.today()) json_data = [ {"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]}, {"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]} ] csv_data = [] for row in json_data: keys = "|".join(row.keys()) values = "|".join([ ",".join(value) if isinstance(value, list) else str(value) for value in row.values() ]) csv_data.append(dict([ ("ID", Hardcoded_Val1), ("DEPT", Hardcoded_Val2), (keys, values), ("execution_dt", Hardcoded_Val3) ])) with open("out.csv", "w", encoding="utf-8", newline="") as file_out: writer = csv.DictWriter(file_out, fieldnames=list(csv_data[0].keys())) writer.writeheader() writer.writerows(csv_data) This will produce a file with the following contents: ID,DEPT,studentid|name|subjects,execution_dt 10,20,"1|ABC|Python,Data Structures",2023-06-02 10,20,"2|PQR|Java,Operating System",2023-06-02
76380911
76381092
I'm making a multi-language application. I want to make the typing as strict and simple as possible. My code is the following: //=== Inside my Hook: ===// interface ITranslation { [key:string]:[string, string] } const useTranslator = (translations:ITranslation) => { const language = useLanguage() // just getting the language setting from another hook const translate = (key:keyof typeof translations) => { // mapping and returning the right translation } return translate; } //=== Inside the component: ===// const translation:ITranslation = { "something in english": [ "something in german", "something in spanish" ], "anotherthing in english": ["anotherthing in german", "anotherthing in spanish"] } const translate = useTranslation(translation) return( <Text>{translate("something in english")}</Text> ) What I want to achieve: When passing the translation object with dynamic keys to the hook useTranslation(translations), there should be a typecheck validating that both languages are provided (every property has an array with 2 strings). When using the translate function (inside the Text component), TypeScript should raise an error if a key does not match the dynamic keys inside the translations object. So this should throw an error: translate("not a key in object") But I can't get it to work properly. I can either set the translations object as const, but then there is no typecheck when passing the object to the hook. Or I set it as shown above with translation:ITranslation, but then there is no typechecking for the parameter of the `translate` function inside the component. Is it possible to achieve that? (If yes, how?) Thanks in advance!
Expect function Parameter to be Key of Object with Dynamic Properties
This solution will work only for Typescript >= 4.9 since it uses the satisfies operator introduced in the 4.9. Adding as const is the approach we will go with, and satisfies will allow us to type-check it. const translation = { 'something in english': ['something in german', 'something in spanish'], 'anotherthing in english': ['anotherthing in german', 'anotherthing in spanish'], } as const satisfies ITranslation; Since we added as const the values in the ITranslation will be readonly [string, string], thus we have to update the ITranslation to the following: interface ITranslation { [key: string]: readonly [string, string]; } Next, we need to add a generic parameter to useTranslator so it works over the specific instance of ITranslation. The same goes for the translate function. It should accept the generic parameter for the key of ITranslation and return the value for that specific key: const useTranslator = <T extends ITranslation>(translations: T) => { const language = useLanguage(); // just getting the language setting from another hook const translate = <K extends keyof T>(key: K): T[K][number] => { // return retrieved value }; return translate; }; Since it is not asked in the question translate will return a union of the translations for the specific key, which is achieved by T[K][number] Usage: const Component = () => { const translate = useTranslator(translation); // "something in german" | "something in spanish" const case1 = translate('something in english'); // "anotherthing in german" | "anotherthing in spanish" const case2 = translate( 'anotherthing in english'); return null; }; playground
76381023
76381114
I have added a script for showing a div before different divs at different screen sizes. This is the code I used: jQuery(function($){ jQuery(document).ready(function(){ jQuery(window).on('resize', function(){ if(jQuery(window).width() <= 1024){ jQuery( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" ).insertBefore( ".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table" ); } else if(jQuery(window).width() >= 1025){ jQuery( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" ).insertBefore( ".checkout.woocommerce-checkout .flux-checkout__content-right #order_review" ); } }); }); }); But the code is not working when I open the site. It only works if I resize the screen, maybe because the logic only runs inside the resize handler. Can anyone please guide me on how to make it apply the 2 conditions even without resizing the screen, so that one works above 1024px and the other below 1024px? TIA
jquery above and below screen sizes
Just put your code in a function and call it on the document ready: $(function(){ resize(); $(window).on('resize', resize); function resize(){ $( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" ) .insertBefore( $(window).width() <= 1024 ? ".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table" : ".checkout.woocommerce-checkout .flux-checkout__content-right #order_review" ); } });
76378620
76378997
I am trying to compare the QuickCheck library to the SmallCheck one. In SmallCheck I can reach a particular value by manipulating the depth parameter. In QuickCheck: >a<-generate (replicateM 10000 arbitrary) :: IO [Int] >length a 10000 >maximum a 30 and my question then is: why are 10,000 "random" ("arbitrary") integers limited to 30?! I expected to see more "widely" distributed values within the range 0..10,000, maybe with the maximum value close to 5,000.
How is arbitrary distributed for Int? Why is it limited by so small values?
The documentation contains a clue: The size passed to the generator is always 30 By default QuickCheck works by starting with 'easy' or 'small' inputs to see if it can find counterexamples with those. Only if it finds no problems with the small inputs does it gradually widen the range of generated input. The size value (which runs implicitly throughout everything that QuickCheck does) is the value that controls this behaviour. When you run QuickCheck (e.g. with quickCheck) it automatically increases the size as it goes. You're not really supposed to use the generate function directly, but if you do, you can resize it: ghci> b <- generate (replicateM 10000 (resize 60 arbitrary)) :: IO [Int] ghci> maximum b 60 That said, how are you supposed to use QuickCheck? The documentation describes quickCheck along with a multitude of variations you can use to evaluate properties. Personally, I integrate my QuickCheck properties with a unit testing framework with testProperty. You can see examples here: Property-based testing is not the same as partition testing.
76384387
76384457
I have the following simple function (make) that calls the handle function and is supposed to retry a number of times whenever that function throws. If the retries are exhausted, the make function should throw the error. const handle = async (): Promise<string> => 'hi'; const make = async (): Promise<string> => { const MAX_RETRIES = 2; for (let idx = 0; idx <= MAX_RETRIES; idx++) { try { return await handle(); } catch (err) { if (idx < MAX_RETRIES) { continue; } else { throw err; } } } }; I'm using TypeScript, which is complaining because the return type doesn't include undefined: Function lacks ending return statement and return type does not include 'undefined'. For reference, this is the TS Playground for the code above. I'm looking for guidance on how to handle the return type for the function. Note that: I don't want to change my tsconfigs (currently set to strict) I don't want to modify the return type to Promise<string | undefined> My understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted. If that's the case then where does the undefined that TS is asking for comes from? Am I missing something?
How can I resolve the TypeScript error 'Function lacks ending return statement and return type does not include 'undefined'' in my code?
My understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted. I'm fairly sure you're right, but TypeScript can't quite follow logic that complex, so it (incorrectly, I think) sees a path through the function that doesn't do an explicit return and so implicitly returns undefined (wrapped in a promise). You can solve it in a few ways: Add a return ""; at the end with a comment noting it'll never happen. (Blech.) Add a throw new Error("Logic error, this will never be reached."); at the end. Rewrite the function to make the final attempt more obviously a return-or-throw situation by using < instead of <= and then repeating the return await handle(); at the end. (Not great to have to repeat it, but it's very simple.) I don't think #1 or #2 need examples, but here's what #3 might look like: const make = async (): Promise<string> => { const MAX_RETRIES = 2; for (let idx = 0; idx < MAX_RETRIES; idx++) { try { return await handle(); } catch (err) { continue; // I guess technically we don't need this, since // the loop doesn't do anything else } } return await handle(); }; For me, #2 is the winner (or jcalz's rewrite), but any of them will make TypeScript happy, it's really a style choice.
76384356
76384460
I am new to PromQL, so I am not sure whether PromQL supports my requirement or not. max_over_time(cbnode_systemstats_cpu_utilization_rate{instance="a",node="a"}[6h]) The query above gives me the max CPU utilization in the past 6 hours for a single instance a. However, I want a query that fetches the metrics for all instances where instance and node have the same value. Something similar to the below: max_over_time(cbnode_systemstats_cpu_utilization_rate{instance = node}[6h])
How can I get all the metrics where two labels have the same value using PromQL?
There is no easy, elegant way to do that. But you can utilize label_replace, the logic of label matching for binary operations, and a pinch of ingenuity. label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "instance", "(.*)") == label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "node", "(.*)") Here we add to the LHS metric a new label called pseudoid with the value of instance, and do the same for the RHS, but with the value of node. A result is returned only if all labels are the same, which in turn means that instance == pseudoid == node. A demo of a similar query can be seen here. Notice that since this is not an instant vector selector, you'll need to use subquery syntax to pass it into max_over_time. Your resulting query should look like this: max_over_time( ( label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "instance", "(.*)") == label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "node", "(.*)") )[6h:] )
76381015
76381115
This is the ItemManufactureController file: class ItemManufactureController extends Controller { public function index(){ return view('item_manufacture'); } // Save category data into the database public function store(Request $request){ $newManufacture = new ItemManufacture; $newManufacture->name = $request->input('txtManufactureName'); $newManufacture->status = $request->input('status', 'available'); dd($newManufacture); $newManufacture->save(); return redirect('/item_manufacture'); } } This is the item_manufacture.blade.php file: {{-- this page is added to the layout --}} @extends('layout.layout_01') {{-- identify the content from the layout --}} @section('content') <div class="container"> <div class="row"> <div class="col-md-4"></div> <div class="col-md-4"> <div class="card"> <h5 class="card-header">Add Item Manufacture Details</h5> <div class="card-body"> <div class="input-field p-3"> <label for="txtManufactureName">Manufacture Name :</label> <div class="col-sm-8 p-2"> <input type="text" placeholder="Item Name" name="txtManufactureName" id="txtManufactureName"> </div> </div> <div class="input-field p-3"> <div class="col-sm-8 p-2"> </div> </div> <a href="/save_manufacture" class="btn btn-primary mb-2" id="btnAdd">ADD</a> </div> </div> </div> <div class="col-md-4"></div> </div> </div> @endsection This is the route file: //save manufacture Route::get('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture'); Route::get('/item_manufacture', function (){ return view('pages.item_manufacture'); }); This is the Model file: class ItemManufacture extends Model { use HasFactory; // public $timestamps=false; protected $connection = 'mysql'; protected $primaryKey = 'id'; protected $table = 'item_manufacture'; protected $fillable = [ 'name', 'status']; } When I add data into the form and click the "ADD" button, the array comes back with null values. I am using the Laravel 8 framework; when I add data into the input fields of the item_manufacture form, the data is not passed in the array. If there is any error in my code, please correct it. How do I save data and get values from the input fields using the Laravel framework?
How to Save data to the Database using Laravel 8?
Please make your route a POST route, since you're storing data, and rename it by chaining the name() method as saveManufacture.store: Route::post('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture.store'); In your blade file, wrap your inputs inside a form tag and set the named route as its action. Then replace the a tag (anchor tag) with an input of type submit, since we have added the action in our form tag. So your blade file will look like this (note the added @csrf directive, which Laravel requires inside POST forms to avoid a 419 error): {{-- this page is added to the layout --}} @extends('layout.layout_01') {{-- identify the content from the layout --}} @section('content') <div class="container"> <div class="row"> <div class="col-md-4"></div> <div class="col-md-4"> <div class="card"> <h5 class="card-header">Add Item Manufacture Details</h5> <div class="card-body"> <form action="{{ route('saveManufacture.store') }}" method="post"> @csrf <div class="input-field p-3"> <label for="txtManufactureName">Manufacture Name :</label> <div class="col-sm-8 p-2"> <input type="text" placeholder="Item Name" name="txtManufactureName" id="txtManufactureName"> </div> </div> <div class="input-field p-3"> <div class="col-sm-8 p-2"> </div> </div> <input type="submit" class="btn btn-primary mb-2" id="btnAdd" value="ADD"> </form> </div> </div> </div> <div class="col-md-4"></div> </div> </div> @endsection Now you'll be able to get the request params in your store() function; please try to debug with dd($request->post());
76378362
76378998
I am working with a chrome extension which uses webpack to build. To build I use this : cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production webpack.config.js const HTMLPlugin = require('html-webpack-plugin'); const CopyPlugin = require('copy-webpack-plugin'); const path = require('path'); const UglifyJSPlugin = require('uglifyjs-webpack-plugin'); const BrowserExtensionPlugin = require("extension-build-webpack-plugin"); module.exports = { entry: { options: './src/options.tsx', popup: './src/popup.tsx', content: './src/content.tsx', background: './src/background.tsx', }, output: { filename: '[name].js', path: path.resolve(__dirname, 'build'), }, resolve: { extensions: ['.js', '.jsx', '.ts', '.tsx', '.css'], modules: [path.resolve(__dirname, 'src'), 'node_modules'], alias: { react: 'preact/compat', 'react-dom': 'preact/compat', }, }, module: { rules: [ { test: /\.(tsx|jsx|ts|js)x?$/, exclude: /node_modules/, use: [ { loader: 'babel-loader', options: { presets: [ "@babel/preset-env", "@babel/preset-react", "@babel/preset-typescript", ], }, }, ], }, { test: /\.svg$/, use: ['@svgr/webpack'], }, ], }, plugins: [ new HTMLPlugin({ chunks: ['options'], filename: 'options.html', title: 'Options page title', }), new HTMLPlugin({ chunks: ['popup'], filename: 'popup.html', }), new CopyPlugin([ { from: './src/_locales/', to: './_locales' }, { from: './src/assets', to: './assets' }, { from: './src/manifest.json', to: './manifest.json' }, ]), new BrowserExtensionPlugin({devMode: false, name: "build/chromium.zip", directory: "src", updateType: "minor"}), ], optimization: { minimizer: [ new UglifyJSPlugin({ uglifyOptions: { compress: { drop_console: true, drop_debugger: true, } } }) ] }, mode: 'production', stats: 'minimal', performance: { hints: false, maxEntrypointSize: 512000, maxAssetSize: 512000 } }; manifest.json: { "manifest_version": 3, "name": "__MSG_appName__", "description": "__MSG_appDesc__", "default_locale": "en", "version": "0.1.0", .... .... } If I run cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production again it increments the version from 0.1.0 to 0.2.0 automatically not just in build folder but in src folder as well. How can I prevent this auto increment functionality. I suspect it's due to one of the webpack plugins I am using.
Prevent webpack from auto-incrementing project version
This is caused by extension-build-webpack-plugin, which you really shouldn't have struggled to find, as there are a total of 4 plugins there to look at. No, it does not offer any method of avoiding version bumps. You can only configure whether you want it to bump the major or minor version number, defaulting to minor. It's a really weird library to be using; it gets few downloads and is unmaintained. There are probably better alternatives out there.
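If all you need is the zip artifact, one option is to drop that plugin entirely and produce the archive with a dedicated plugin that does not touch your sources. A minimal sketch, assuming zip-webpack-plugin as the replacement (my suggestion; the original answer does not name one):

// webpack.config.js
const ZipPlugin = require('zip-webpack-plugin');

module.exports = {
  // ...keep the rest of the existing config, minus BrowserExtensionPlugin...
  plugins: [
    // Zips the build output without rewriting manifest.json,
    // so the version in src stays exactly as you wrote it.
    new ZipPlugin({ filename: 'chromium.zip' }),
  ],
};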
76384281
76384463
I want to parse some data that's in a string format. Anything enclosed in parenthesis in the string to parse should be replaced with itself run through a function. This is what I want: function foo(str) { return parseInt(str) + 1; // Example function, not actually what the function will be } function parse(str) { // everything in str that is enclosed in parenthesis should be replaced with itself ran through foo(); // Example // Input: "My name is foo and I am (0) year old." // Output: "My name is foo and I am 1 year old." // "(0)" has been replaced with the result of foo("0") } I have thought up a couple bad workarounds, but I want something more robust. For example: function parse(str) { // Input: "My name is foo and I am (0) year old." str = str.replaceAll("(", "${foo('"); str = str.replaceAll(")", "')}"); str = "`" + str + "`" // Here str will be "`My name is foo and I am ${foo(0)} year old.`" // And I can use eval() or something to treat it like I've typed that } This, however, is kind of a bad way of doing it. EDIT: I tested it, it works, but it is quite vulnerable. I can't think of anything else and I'm not very good with RegEx. (although I'd accept a solution using it)
Javascript: run specific parts of strings through a function
Here's what I would do. I would match the string with a RegEx that matches anything inside parentheses in the string. With that, I would then use str.replaceAll() to replace the matched string with the result of the foo() function. const regex = /\((\d*)\)/gm; function foo(str) { return parseInt(str) + 1; } function parse(str) { // Loop over all matches the regex finds in the string let m; while ((m = regex.exec(str)) !== null) { // This is necessary to avoid infinite loops with zero-width matches if (m.index === regex.lastIndex) { regex.lastIndex++; } // Replace all instances of the match with the result of the operation on the match str = str.replaceAll(m[0], foo(m[1])) } return str; } let p = parse('My name is foo and I am (0) year old and I want (54) apples'); // The result will be: My name is foo and I am 1 year old and I want 55 apples With that, you won't need to use eval(), as it potentially poses a risk for your application. I hope that works for you. If I missed anything, tell me and I will edit my answer.
76381105
76381147
I have a minute-level time series which looks like this: Time Volume every minute 2023-05-25T00:00:00Z 284 2023-05-25T00:01:00Z 421 . . . . 2023-05-27T23:58:00Z 894 2023-05-27T23:59:00Z 357 I have to make a new CSV by iterating over the Time column, finding the unique dates, and making new columns with the corresponding values of volume for every minute. For example, the desired output: Date min1 min2 ... min1440 2023-05-25 284 421 ... 578 2023-05-26 512 645 ... 114 2023-05-27 894 357 ... 765 I am able to fetch the unique dates, but after that I am clueless. Please find my sample code: import pandas as pd train_data = pd.read_csv('date25to30.csv') print(pd.to_datetime(train_data['time']).dt.date.unique())
Find unique dates from an existing dataframe and make a new CSV with corresponding column values
First, add the parameter parse_dates to read_csv to convert the Time column to datetimes: train_data = pd.read_csv('date25to30.csv', parse_dates=['Time']) Then create minutes by converting HH:MM:SS to timedeltas via to_timedelta and Series.dt.total_seconds, dividing by 60, and adding 1 because Python counts from 0: minutes = (pd.to_timedelta(train_data['Time'].dt.strftime('%H:%M:%S')) .dt.total_seconds() .div(60) .astype(int) .add(1)) Last, pass the result to DataFrame.pivot_table with DataFrame.add_prefix: df = (train_data.pivot_table(index=train_data['Time'].dt.date, columns=minutes, values='Volume', aggfunc='sum').add_prefix('min')) print (df) Time min1 min2 min1439 min1440 Time 2023-05-25 284.0 421.0 NaN NaN 2023-05-27 NaN NaN 894.0 357.0
76378633
76379006
I want to hide the AppBar on scroll. The search icon is hidden properly and also the opacity decreases on scroll. But for the title, it is not working. import 'package:flutter/material.dart'; import 'package:vet_mobile/screens/chat.dart'; import 'package:vet_mobile/screens/logot.dart'; class HomeScreen extends StatelessWidget { const HomeScreen({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return DefaultTabController( length: 3, child: Scaffold( body: NestedScrollView( headerSliverBuilder: (BuildContext context, bool innerBoxIsScrolled) { return <Widget>[ SliverAppBar( title: Row( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [ Text( 'WhatsApp', style: TextStyle( color: Theme.of(context).textTheme.bodyLarge!.color, ), ), IconButton( onPressed: () {}, icon: Icon( Icons.search, color: Theme.of(context).textTheme.bodyLarge!.color, ), ), ], ), pinned: true, floating: true, elevation: 5, bottom: TabBar( indicatorSize: TabBarIndicatorSize.tab, indicatorWeight: 4, indicatorColor: Theme.of(context).textTheme.bodyLarge!.color, labelStyle: TextStyle(fontSize: 13, fontWeight: FontWeight.w600), labelColor: Theme.of(context).textTheme.bodyLarge!.color, unselectedLabelColor: Theme.of(context).textTheme.bodySmall!.color, dividerColor: Colors.transparent, tabs: const [ Tab(text: 'CHATS'), Tab(text: 'STATUS'), Tab(text: 'CALLS'), ], ), ), ]; }, body: const TabBarView( children: [ Center(child: LogoutScreen()), Center(child: ChatScreen()), Center(child: Text('Patient')), ], ), ), ), ); } } As we can see the opacity of the search button decreases slowly as I scroll down but not for the title. I tried using the preferred height, animation controller, but it messed up more.
Cannot properly hide the appbar title on scroll in flutter
Seems that this effect does not work when you set a custom style. Remove the fixed style setting from here: Text( 'PawCare', // remove this /*style: TextStyle( color: Theme.of(context).textTheme.bodyLarge!.color, ),*/ ), To set the style of the title text, use the titleTextStyle configuration of SliverAppBar: SliverAppBar( titleTextStyle: TextStyle( color: Theme.of(context).textTheme.bodyLarge!.color), ...
76378657
76379177
I have the following algebraic data type: data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show, Eq) Also, I have data Step = StepL | StepR deriving (Show, Eq) Now, I need a function search that takes a root of the tree a target value t ... and it must return a path of type [Step] leading to a node with value t. Also, if t is not present in the tree, search must return Nothing. Finally, the input is guaranteed to have the target value at most once. My best effort, as of now, is: searchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step] searchHelper _ Empty _ = Nothing searchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar = if targetValue == nodeValue then Just stepsSoFar else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) else if searchHelper targetValue rightChild (stepsSoFar ++ [StepR]) /= Nothing then searchHelper targetValue rightChild (stepsSoFar ++ [StepR]) else Nothing search :: Eq a => a -> Tree a -> Maybe [Step] search targetValue root = searchHelper targetValue root [] As you can see, I call the searchHelper too often (else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL])). I need a machinery that would allow me to cache the results of searchHelper calls and use them in if ... then ... else. Q: How can I do it?
Haskell: cache result of a function in pattern matching
The use of the word cache confused me, but if I understand the question correctly, the real problem is the repeated use of the same expression. That could certainly become a readability and maintainability issue in a larger code base, so is worthwhile addressing. From the context this looks like a 'toy problem'. There's nothing wrong with that - I play with plenty of those myself to learn new stuff. The reason I mention it, though, is that from this and other clues I gather that you're still a Haskell beginner. Again: nothing wrong with that, but it just means that I'm going to skip some of the slightly more advanced Haskell stuff. Checking for Nothing or Just like in the OP is rarely idiomatic Haskell. Instead you'd use pattern-matching or (more commonly) some of the higher-level APIs for working with Maybe (such as Functor, Applicative, Monad, etc.). That said, I gather that this isn't quite what you need right now. In order to cut down on the duplication of expressions, you can use let..in syntax in Haskell: searchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step] searchHelper _ Empty _ = Nothing searchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar = if targetValue == nodeValue then Just stepsSoFar else let l = searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) in if l /= Nothing then l else let r = searchHelper targetValue rightChild (stepsSoFar ++ [StepR]) in if r /= Nothing then r else Nothing This enables you to 'declare' 'variables' l and r and reuse them. As my lengthy preamble suggests, this still isn't idiomatic Haskell, but I hope it adresses the immediate question.
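For completeness, here is a sketch of the more idiomatic direction hinted at above, reusing the Tree and Step types from the question and the Alternative instance of Maybe; this version is my illustration, not part of the original answer:

import Control.Applicative ((<|>))

searchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step]
searchHelper _ Empty _ = Nothing
searchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar
  | targetValue == nodeValue = Just stepsSoFar
  | otherwise =
      -- (<|>) returns the left result unless it is Nothing, so each
      -- subtree is searched at most once and no comparison is needed.
      searchHelper targetValue leftChild (stepsSoFar ++ [StepL])
        <|> searchHelper targetValue rightChild (stepsSoFar ++ [StepR])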
76383893
76384507
Python OOP problem MultiKeyDict class, which is almost identical to the dict class. Creating an instance of MultiKeyDict class should be similar to creating an instance of dict class: multikeydict1 = MultiKeyDict(x=1, y=2, z=3) multikeydict2 = MultiKeyDict([('x', 1), ('y', 2), ('z', 3)]) print(multikeydict1['x']) # 1 print(multikeydict2['z']) # 3 A feature of the MultiKeyDict class should be the alias() method, which should allow aliases to be given to existing keys. The reference to the created alias should not differ from the reference to the original key, that is, the value has two keys (or more if there are several aliases) when the alias is created: multikeydict = MultiKeyDict(x=100, y=[10, 20]) multikeydict.alias('x', 'z') # add key 'x' alias 'z' multikeydict.alias('x', 't') # add alias 't' to key 'x' print(multikeydict['z']) # 100 multikeydict['t'] += 1 print(multikeydict['x']) # 101 multikeydict.alias('y', 'z') # now 'z' becomes an alias of the key 'y' multikeydict['z'] += [30] print(multikeydict['y']) # [10, 20, 30] The value must remain available by alias even if the original key was removed: multikeydict = MultiKeyDict(x=100) multikeydict.alias('x', 'z') del multikeydict['x'] print(multikeydict['z']) # 100 Keys must take precedence over aliases. If some key and alias are the same, then all operations when accessing them must be performed with the key: multikeydict = MultiKeyDict(x=100, y=[10, 20]) multikeydict.alias('x', 'y') print(multikeydict['y']) # [10, 20] I can't implement such a feature, please give me ideas how it can be done!!! multikeydict = MultiKeyDict(x=100) multikeydict.alias('x', 'z') del multikeydict['x'] print(multikeydict['z']) # 100 my code does not work with this test multikeydict = MultiKeyDict(x=100) multikeydict.alias('x', 'z') del multikeydict['x'] print(multikeydict['z']) #100 class MultiKeyDict(dict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.aliases = {} def alias(self, key, alias): self.aliases[alias] = key def __getitem__(self, key): if key in self.aliases: key = self.aliases[key] return super().__getitem__(key) def __setitem__(self, key, value): if key in self.aliases: key = self.aliases[key] super().__setitem__(key, value) def __delitem__(self, key): if key in self.aliases: del self.aliases[key] super().__delitem__(key) multikeydict = MultiKeyDict(x=100, y=[10, 20]) multikeydict.alias('x', 'z') multikeydict.alias('x', 't') print(multikeydict['z']) multikeydict['t'] += 1 print(multikeydict['x']) multikeydict.alias('y', 'z') multikeydict['z'] += [30] print(multikeydict['y'])
Implement MultiKeyDict class in Python with alias() method for creating aliases. Existing code fails when original key is deleted. Need fix
Some remarks: As the specification says that keys should have precedence over aliases (when both exist), you should first test key membership on self before looking in aliases. Your methods first check for membership in aliases... As a value must continue to exist when a key is deleted for which there are still alias(es), I would suggest storing the values wrapped in a list (that just has that value as only member). This way you can reference that list in an alias entry. When the key is deleted, the alias will still have the reference to the list and can still act on it. Here is how that could look: class MultiKeyDict(dict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.aliases = {} # wrap each value in a list of size 1: for key, value in self.items(): super().__setitem__(key, [value]) def alias(self, key, alias): self.aliases[alias] = super().__getitem__(key) def __getitem__(self, key): if key in self: return super().__getitem__(key)[0] return self.aliases[key][0] def __setitem__(self, key, value): if key in self: super().__getitem__(key)[0] = value elif key in self.aliases: self.aliases[key][0] = value else: super().__setitem__(key, [value]) def __delitem__(self, key): if key in self: return super().__delitem__(key) del self.aliases[key]
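As a quick check, the previously failing test from the question now behaves as required with this class (the trailing comment is my own annotation):

multikeydict = MultiKeyDict(x=100)
multikeydict.alias('x', 'z')
del multikeydict['x']
print(multikeydict['z'])  # 100, because the alias still references the shared wrapper list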
76381091
76381163
The scenario is the following: type Option = 'a' | 'b' | 'c' | 'd' type Question = { message: string; options: Option[]; default: Option // here's the issue } I want the default prop to be the one of the options used inside question.options. For example: const q1: Question = { message: 'first question', options: ['a', 'b'], default: 'a' } const q2: Question = { message: 'second question', options: ['c', 'd'], default: 'a' // I want this to give an error because 'a' is not in 'c' | 'd' } How can I achieve this?
Narrow down literal unions based on previously used values
It can be done just by using Question; however, it will be a complex type that will cause a horrible time for the compiler since it grows at the speed of power of two, and if you have more options (more than 10), the compiler will reach its limits and won't compile. Instead, I would suggest adjusting Question to accept the Option[] as a generic parameter and assign the type of the elements of that generic parameter to default: type Question<T extends Option[]> = { message: string; options: T; default: T[number]; }; Lastly, we will need a generic function that would create a question for us: const createQuestion = <T extends Option[]>(question: Question<T>) => question; Usage: const q1 = createQuestion({ message: "first question", options: ["a", "b"], default: "a", }); const q2 = createQuestion({ message: "second question", options: ["c", "d"], default: "a", // Expected error }); playground
76378693
76379413
I want to make my NavigationBar transparent. I have tried extendBody: true on the Scaffold with surfaceTintColor: Colors.transparent on the NavigationBar widget, but nothing changed.
How to create a transparent Material 3 NavigationBar in Flutter?
According to the documentation, surfaceTintColor is the color of the surface tint overlay applied to the app bar's background color to indicate elevation. If you want to make the AppBar transparent, just use the property backgroundColor instead. Scaffold( extendBody: true, backgroundColor: Colors.white, appBar: AppBar( backgroundColor: Colors.transparent, // To make the appBar transparent /// This is not necessary. You can play around /// to see surfaceTintColor when the AppBar is transparent surfaceTintColor: Colors.redAccent, elevation: 3, title: Text(widget.title), ), ), It is also applied to NavigationBar: bottomNavigationBar: NavigationBar( surfaceTintColor: Colors.amber, // not necessary backgroundColor: Colors.transparent, destinations: [ Icon(Icons.book, color: Colors.blue,), Icon(Icons.map, color: Colors.blue,), ], ),
76378332
76379520
I am using library(tableone) to make my descriptive statistics for multiple variables. This is my code: library(tableone) myVars <- c("class", "age", "Sex", "bmi", "bmi_category", "drink_freq", "smoke_yn", "edu_dummy") catVars <- c("class", "Sex", "bmi_category", "drink_freq", "smoke_yn", "edu_dummy") tab1_inf <- CreateTableOne(vars = myVars, strata = "NEWDI", data = TKA_table1, factorVars = catVars) a1 <- print(tab1_inf, exact = "NEWDI", showAllLevels = TRUE) This uses the default percentage format, and I want to change the format like this (example): I checked its description and found no options to set this. https://rdrr.io/cran/tableone/man/print.TableOne.html How can I do it?
How to use tableone to change table percentage by row?
With some clever getting-your-hands-dirty work, you can manipulate the percentages in the TableOne object. This uses an example dataset called pbc from the survival package. library(tableone) library(survival) data(pbc) ## Make categorical variables factors varsToFactor <- c("status","trt","ascites","hepato","spiders","edema","stage") pbc[varsToFactor] <- lapply(pbc[varsToFactor], factor) ## Create a variable list vars <- c("time","status","age","sex","ascites","hepato", "spiders","edema","bili","chol","albumin", "copper","alk.phos","ast","trig","platelet", "protime","stage") ## Create Table 1 stratified by trt tableOne <- CreateTableOne(vars = vars, strata = c("trt"), data = pbc) tableOne Before Stratified by trt 1 2 p test n 158 154 time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883 status (%) 0.894 0 83 (52.5) 85 (55.2) 1 10 ( 6.3) 9 ( 5.8) 2 65 (41.1) 60 (39.0) age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018 sex = f (%) 137 (86.7) 139 (90.3) 0.421 ascites = 1 (%) 14 ( 8.9) 10 ( 6.5) 0.567 hepato = 1 (%) 73 (46.2) 87 (56.5) 0.088 spiders = 1 (%) 45 (28.5) 45 (29.2) 0.985 ... You should try to adapt the following code for your own data format (the loop runs over the variables of each stratum in the CatTable): for (i in seq_along(tableOne$CatTable[[1]])) { sum = tableOne$CatTable[[1]][[i]]$freq + tableOne$CatTable[[2]][[i]]$freq tableOne$CatTable[[1]][[i]]$percent = tableOne$CatTable[[1]][[i]]$freq / sum tableOne$CatTable[[2]][[i]]$percent = tableOne$CatTable[[2]][[i]]$freq / sum } tableOne After Stratified by trt 1 2 p test n 158 154 time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883 status (%) 0.894 0 83 (0.5) 85 (0.5) 1 10 (0.5) 9 (0.5) 2 65 (0.5) 60 (0.5) age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018 sex = f (%) 137 (0.5) 139 (0.5) 0.421 ascites = 1 (%) 14 (0.6) 10 (0.4) 0.567 hepato = 1 (%) 73 (0.5) 87 (0.5) 0.088 spiders = 1 (%) 45 (0.5) 45 (0.5) 0.985
76384509
76384598
In the code below, we have a dataset that can be read as: "two cooks cook1, cook2 are doing a competition. They have to make four dishes, each time with two given ingredients ingredient1, ingredient2. A jury has scored the dishes and the grades are stored in _score. I want to use Altair to show a graph where the x-axis is each dish (1, 2, 3, 4) and the y-axis contains the scores of the two cooks separately. This currently works but the main issue is that on hover, the tooltip does not include the score of the current point that is being hovered. import altair as alt import pandas as pd df = pd.DataFrame({ "ingredient1": ["potato", "onion", "carrot", "beet"], "ingredient2": ["tomato", "pepper", "zucchini", "lettuce"], "dish": [1, 2, 3, 4], "cook1": ["cook1 dish1", "cook1 dish2", "cook1 dish3", "cook1 dish4"], "cook1_score": [0.4, 0.3, 0.7, 0.9], "cook2": ["cook2 dish1", "cook2 dish2", "cook2 dish3", "cook2 dish4"], "cook2_score": [0.6, 0.2, 0.5, 0.6], }) value_vars = [c for c in df.columns if c.endswith("_score")] cook_names = [c.replace("_score", "") for c in value_vars] id_vars = ["dish", "ingredient1", "ingredient2",] + cook_names df_melt = df.melt(id_vars=id_vars, value_vars=value_vars, var_name="cook", value_name="score") chart = alt.Chart(df_melt).mark_circle().encode( x=alt.X("dish:O", title="Dish number"), y=alt.Y("score:Q", title="Score"), color="cook:N", tooltip=id_vars ) chart.show() I tried explicitly adding the score columns to the tooltip: tooltip=id_vars+value_vars But that yields the following error: ValueError: cook1_score encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data. So how can I get altair to also show the score of (only) the currently hovered element?
Altair: showing the value of the current point in the tooltip
cook1_score is not a column in df_melt, which is why you see the error. Setting tooltip=id_vars+['score'] will work.
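Spelled out, the encoding from the question becomes (my own restatement of that one-line fix):

chart = alt.Chart(df_melt).mark_circle().encode(
    x=alt.X("dish:O", title="Dish number"),
    y=alt.Y("score:Q", title="Score"),
    color="cook:N",
    tooltip=id_vars + ["score"],  # 'score' is a real column in df_melt; the *_score columns are not
)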
76384490
76384624
I have created a simple material app in flutter with: flutter create --platforms=android,windows columntest When I run the program on Android and Windows, I get some kind of padding between the ElevatedButtons on Android, but not on Windows. Do you know where this comes from and how I can make the design consistent? The behavior seems to occur only with buttons (TextButton, OutlinedButton, ElevatedButton). I have also tested this with container (with border), there it does not occur. Here the code from the small app: import 'package:flutter/material.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', home: Scaffold( body: Center( child: Column( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.center, children: [ ElevatedButton(child: const Text("Foobar1"), onPressed: () {}), ElevatedButton(child: const Text("Foobar2"), onPressed: () {}), ], ), ), ), ); } } Here is a screenshot at runtime: Here my flutter version: $ flutter --version Flutter 3.10.0 • channel stable • https://github.com/flutter/flutter.git Framework • revision 84a1e904f4 (3 weeks ago) • 2023-05-09 07:41:44 -0700 Engine • revision d44b5a94c9 Tools • Dart 3.0.0 • DevTools 2.23.1 My Android Emulator is an: Pixel_3a_API_33_x86_64 But the behaviour also occurs on my physical Pixel 6 (with android UpsideDownCake) I look forward to your responses. best regards Michael
Flutter: Inconsistent column padding on Buttons between Android and Windows
So, this behaviour comes from Flutter itself. It is because of the ThemeData.materialTapTargetSize parameter of the MaterialApp. This feature decides what the touchable dimensions of a Material button should be, in your case the ElevatedButton. You have 2 potential solutions: Change the padding of the ElevatedButton like below: ElevatedButton( onPressed: () {}, style: const ButtonStyle(padding: MaterialStatePropertyAll(EdgeInsets.zero)), child: const Icon(Icons.abc), ), Change the value in the MaterialApp: MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, materialTapTargetSize: MaterialTapTargetSize.shrinkWrap), home: CupertinoPickerExample(), ) Reference: https://stackoverflow.com/a/67580951
76378581
76379917
In my Mojolicious Controller, I have: my @promise; foreach my $code (\&doit1, \&doit2,) { my $prom = Mojo::Promise->new; Mojo::IOLoop->subprocess( sub { my $r = $code->("Hello"); return $r; }, sub { my ($subprocess, $err, @res) = @_; return $prom->reject($err) if $err; $prom->resolve(@res); }, ); push @promise, $prom; } Mojo::Promise ->all(@promise) ->then( sub { my ($result1, $result2) = map {$_->[0]} @_; }); This works, and I can pass arguments (e.g. Hello) to my sub. Now I converted doti1() and doit2() as helpers. So the code looks like: foreach my $code (sub {$self->myhelper->doit1("Goodbye")}, sub {$self->myhelper->doit2("Good night")}, ) { my $prom = Mojo::Promise->new; Mojo::IOLoop->subprocess( sub { my $r = $code->("Hello"); # this is ignored? return $r; }, sub { my ($subprocess, $err, @res) = @_; return $prom->reject($err) if $err; $prom->resolve(@res); }, ); push @promise, $prom; } How can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref (i.e. avoid Goodbye & Good night)? I like the idea of passing the same arguments for each code ref: $code->("Hello")
Perl Mojolicious: Passing arguments to a code ref
Now I converted doit1() and doit2() as helpers. So the code looks like: foreach my $code (sub {$self->myhelper->doit1("Goodbye")}, sub {$self->myhelper->doit2("Good night")}, ) { #.... } Yes, but you are calling the helpers from another anonymous sub. How can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref? So, to recover the argument and pass it on to the helper, you just do: foreach my $code (sub {my $arg = shift; $self->myhelper->doit1($arg)}, sub {my $arg = shift; $self->myhelper->doit2($arg)}, ) {...} or, more generally, as @Dada pointed out in the comments: foreach my $code (sub {$self->myhelper->doit1(@_)}, sub {$self->myhelper->doit2(@_)}, ) {...}
76378589
76381943
I need to retrieve the attributes of a certificate that is stored in the keychain on my Mac from the command line. I can collect them manually from the Keychain Access app, but I want to do that with a script. I used the security command to get a certificate and "grep" to inspect the "subject" section: security find-certificate -c "Apple Development" login.keychain | grep "subj" and then got the following output (some omitted by "..."). "subj"<blob>=0x3081943...553 "0\201\2241\0320\03...02US" In the output above, what format is the data following "subj"<blob>= and how can I parse it? I found that decoding the first half of the hexadecimal sequence(0x30...) with UTF-8 yields the second half of the string (0\201...), but I don't know what 0\201\2241\... means. I have tried other character codes, but they just give me garbled characters.
How can I parse the certificate information output from the security command in Mac?
As for the format, the certificates are stored in DER/PEM format, which is a representation of ASN.1 encoded data. What you see in the output is the hexadecimal representation of the ASN.1 binary data. The blob indicates that the value or attribute is stored as binary data. As for exporting (for certificates), I would highly recommend combining security with openssl as follows: security find-certificate -p -c "Apple Development" login.keychain | openssl x509 -noout -subject The -p option in the security command exports the found certificate in PEM format, which is something openssl can use. You can then pipe the PEM data into the openssl command, where one can easily extract the subject using the -subject option. You can check out both the man page of security and the man page of openssl x509.
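The same pipeline extends to other attributes if you need more than the subject; for example (a sketch using standard openssl x509 flags):

security find-certificate -p -c "Apple Development" login.keychain \
  | openssl x509 -noout -subject -issuer -dates -fingerprint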
76384082
76384628
I have two variables that I'm trying to model the relationship between and extract the residuals. The relationship between the two variables is clearly a non-linear exponential relationship. I've tried a few different approaches with nls, but I keep getting different error messages. # dataset df <- structure(list(y = c(464208.56, 334962.43, 361295.68, 426535.68, 258843.93, 272855.46, 166322.72, 244695.28, 227003.03, 190728.4, 156025.45, 72594.24, 56911.4, 175328.95, 161199.76, 152520.77, 190610.57, 60734.34, 31620.9, 74518.86, 45524.49, 2950.58, 2986.38, 15961.77, 12484.05, 6828.41, 2511.72, 1656.12, 5271.4, 7550.66, 3357.71, 3620.43, 3699.85, 3337.56, 4106.55, 3526.66, 2996.79, 1649.89, 4561.64, 1724.25, 3877.2, 4426.69, 8557.61, 6021.61, 6074.17, 4072.77, 4032.95, 5280.16, 7127.22), x = c(39.23, 38.89, 38.63, 38.44, 38.32, 38.27, 38.3, 38.4, 38.56, 38.79, 39.06, 39.36, 39.68, 40.01, 40.34, 40.68, 41.05, 41.46, 41.93, 42.48, 43.14, 43.92, 44.84, 45.9, 47.1, 48.4, 49.78, 51.2, 52.62, 54.01, 55.31, 56.52, 57.6, 58.54, 59.33, 59.98, 60.46, 60.78, 60.94, 60.92, 60.71, 60.3, 59.69, 58.87, 57.86, 56.67, 55.33, 53.87, 52.33)), row.names = c(NA, -49L), class = c("tbl_df", "tbl", "data.frame"), na.action = structure(c(`1` = 1L, `51` = 51L), class = "omit")) # initial model m <- nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df) Error in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, : singular gradient # add term for alg m <- nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, alg = "plinear") Error in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, : step factor 0.000488281 reduced below 'minFactor' of 0.000976562
Error messages when fitting a non-linear exponential model between two variables
log-Gaussian GLM As @Gregor Thomas suggests you could linearize your problem (fit a log-linear regression), at the cost of changing the error model. (Basic model diagnostics, i.e. a scale-location plot, suggest that this would be a much better statistical model!) However, you can do this efficiently without changing the error structure by fitting a log-link Gaussian GLM: m1 <- glm(y ~ x, family = gaussian(link = "log"), data = df) The model is y ~ Normal(exp(b0 + b1*x), s), so a = exp(b0), r = b1. I tried using list(a=exp(coef(m1)[1]), r=coef(m1)[2]) as starting values, but even this was too finicky for nls(). There are two ways to get nls to work. shifted exponential As @GregorThomas suggests, shifting the x-axis to x=38 also works fine (given a sensible starting value): m <- nls(y ~ a * exp(r * (x-38)), start = list(a = 3e5, r = -0.35), data = df) provide nls with a gradient The deriv function will generate a function with the right structure for nls (returns the objective function, with a ".grad" attribute giving a vector of derivatives) if you ask it nicely. (I'm also using the exponentiated intercept from the log-Gaussian GLM as a starting value ...) f <- deriv( ~ a*exp(r*x), c("a", "r"), function.arg = c("x", "a", "r")) m2 <- nls(y ~ f(x, a, r), start = list(a = exp(coef(m1)[1]), r = -0.35), data = df) We can plot these to compare the predictions (visually identical): par(las = 1, bty = "l") xvec <- seq(38, 60, length = 101) plot(y ~ x, df) lines(xvec, predict(m1, newdata = data.frame(x=xvec), type = "response"), col = 2) lines(xvec, predict(m, newdata = data.frame(x=xvec)), col = 4, lty = 2) lines(xvec, predict(m2, newdata = data.frame(x=xvec)), col = 5, lty = 2) With a little bit of extra work (exponentiating the intercept for the Gaussian GLM, shifting the x-origin back to zero for the nls fit) we can compare the coefficients (only equal up to a tolerance of 2e-4 but that should be good enough, right?) a1 <- exp(coef(m1)[[1]]) a2 <- coef(m)[[1]]*exp(-38*coef(m)[[2]]) all.equal(c(a = a1, r = coef(m)[[2]]), c(a = a2, r = coef(m1)[[2]]), tolerance = 1e-4) all.equal(c(a = a1, r = coef(m)[[2]]), coef(m2), tolerance = 2e-4)
76382271
76382378
I'm trying to insert the data inside a forall loop. For this case, I cannot use a temporary variable and set result of the function beforehand. The function just maps a number to a string: create or replace function GetInvoiceStatus(status number) return nvarchar2 as begin case status when 0 then return 'New'; when 200 then return 'Sent'; when 300 then return 'Accepted'; end case; return ''; end; when I call this function like: select GetInvoiceStatus(200) from dual; I get the appropriate result. However, when I try to insert the data I get errors. The forall insert: forall i in 1.. INVOICE_DATA.COUNT insert into "InvoiceAudit" ("PropertyName", "OldValue", "NewValue" ( VALUES ('Status', (GetInvoiceStatus(invoice_data(i).status)), ((GetInvoiceStatus((select "Status" from "Invoice" where "InvoiceId" = invoice_data(i).invoiceId))))); However, I get the following error: [2023-06-01 15:02:57] [65000][6592] [2023-06-01 15:02:57] ORA-06592: CASE not found while executing CASE statement [2023-06-01 15:02:57] ORA-06512: at "PUBLIC.GETINVOICESTATUS", line 9 [2023-06-01 15:02:57] ORA-06512: at "PUBLIC.INVOICESSP", line 63 [2023-06-01 15:02:57] Position: 5 I have double checked, and the results from invoice_data(i).Status and the other select value are both valid parameters (and have their cases covered) and return appropriate string when called outside the stored procedure. Is the syntax somewhere wrong? I would like to remain using forall if at all possible because it is much faster than a regular for loop.
Function call as a parameter inside insert values statement
This error means that the parameter value (status) is not one of the cases in the case expression (which are 0, 200, 300). If you execute this code select GetInvoiceStatus(555) as dd from dual you will get the same error. So, add an ELSE clause like this: create or replace function GetInvoiceStatus(status number) return nvarchar2 as begin case status when 0 then return 'New'; when 200 then return 'Sent'; when 300 then return 'Accepted'; else return ''; end case; end;
76384531
76384635
I have a spreadsheet with an importrange and vlookup to another file, where it's looking up a pivot table. Some data is blank in the pivot table, and when I look it up with the formula I get a blank result, even though I have set it to return 0 via iferror. Here's my formula: =iferror(VLOOKUP(A5,importrange("12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg","Converted Pivot!A:E"),3,false),0)
Pivot returning blank instead of 0 in Google Sheets
You may try: =let(Σ,ifna(vlookup(A5,importrange("12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg","Converted Pivot!A:E"),3,),"no_match_found"), if(Σ="",0,Σ)) A blank value will now be shown as 0, and a non-matching lookup will be surfaced as no_match_found.
76380577
76381169
I am trying to make a layout with: A header (gray block in the snippet) A body (lime borrder) Main body content ( blocks with red border) If you scroll horizontally, then the header should not scroll, it should be full width and stay in view. If you scroll vertically, then the header should scroll off the page as usual. The height of the header is dynamic, and fits the content within it (this SO answer works with a fixed height).. The <main> element is allowed to be wider than the viewport, but the header is always the viewport width. The reason I dont add max-width: 100%; overflow-x: auto on the <main> element (like this SO answer, is because then the horizontal scroll appears at the bottom of the element, and then say one is reading the first block, and you wish to scroll horizontally, you have to scroll to the bottom of the main element to see the horizontal scroll bar, scroll to the side, then scroll back up. I wish to have the horizontal scroll bar always present if main is wider than the view port. I have tried position: sticky/fixed on the header but could not get it to work. I would prefer not to use JavaScript if possible. header { padding: 32px; background: gray; width: 100%; } main { border: 2px solid lime; min-width: 100%; } div { height: 200px; width: 120%; /* make it overflow horizontally */ display: flex; align-items: center; justify-content: center; border: 2px solid red; } <header>The Header should not scroll horizntally<br>(is dynamic height)</header> <main> <div>content 1</div> <div>content 2</div> <div>content 3</div> <div>content 4</div> <div>content 5</div> <div>content 6</div> </main>
Make an element not scroll horizontally
What I have done here is make the header sticky to the left part of the screen. Its parent element must be aware of the size of your content to allow the header to move, so I set the body min-width to min-content, and the same for main so it can transfer its children's size to the body. You may also notice I used box-sizing: border-box; in the header; it's so the padding size is taken into account when the element size is calculated (100vw in this case). You don't want to use % on the header width because it won't have room to slide. Also, the div sizes must not be dependent on the parent size, so you can't use % here either. body{ min-width: min-content; } header { box-sizing: border-box; position: sticky; left: 0; padding: 32px; background: gray; width: 100vw; } main { min-width: min-content; border: 2px solid lime; } div { height: 200px; width: 120vw; /* make it overflow horizontally */ display: flex; align-items: center; justify-content: center; border: 2px solid red; } <body> <header>The Header should not scroll horizntally<br>(is dynamic height)</header> <main> <div>content 1</div> <div>content 2</div> <div>content 3</div> <div>content 4</div> <div>content 5</div> <div>content 6</div> </main> </body>
76382239
76382400
I'm trying to create a small web application using Svelte. One of the requirements is to be able to change the application "theme" on demand, for example - dark theme, light theme, high contrast, and so on. I've been using an online mixin snippet to help me with that - https://medium.com/@dmitriy.borodiy/easy-color-theming-with-scss-bc38fd5734d1 However, this doesn't work consistently, and I often get errors like: [vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector "main.default-theme div.some.element.identification" even though the selector is used and is receiving its non-themed attributes. Inside a themes.scss file: @mixin themify($themes) { @each $theme, $map in $themes { main.#{$theme}-theme & { $theme-map: () !global; @each $key, $submap in $map { $value: map-get(map-get($themes, $theme), '#{$key}'); $theme-map: map-merge($theme-map, ($key: $value)) !global; } @content; $theme-map: null !global; } } } @function themed($key) { @return map-get($theme-map, $key); } $themes: ( default: ( strokeColor: green, fillColor: red, ), ); and inside another scss file that imports themes.scss: div.some.element.identification { some-non-themed-attribute: some-value; @include themify($themes) { stroke: themed('strokeColor'); fill: themed('fillColor'); } } Now the punchline - when using this methodology, some elements receive their appropriate themed attributes, and others don't. I am also seeing the following error: [vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector "main.default-theme div.some.element.identification" The issue doesn't seem to be in the css selectors, since the elements that don't receive the themed attributes still receive the other non-themed attributes in the same css clause. Two final observations - When I'm building the project (using vite build), I can see that the css asset file being created doesn't include the css selectors that are missing their themed attributes. When I'm using the devtools to locate the supposedly unused selectors (whose themed attributes are not present), they can be found - despite the error message. I've been trying different ways to solve this issue and nothing works consistently. Thank you in advance for your help!
"Unused CSS selector" when using a SASS themify mixin with Svelte and Vite:
You could try checking these different items: If you use svelte-preprocess, try to add scss: { prependData: `@import 'src/styles/theme.scss';` } (or whatever the path to your theme is) to the config object; a sketch of this is shown below. If it still does not work, maybe try to swap svelte-preprocess with vite-preprocess. Disable any potential CSS purge plugin.
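For the first item, a minimal sketch of that config, assuming svelte-preprocess and the theme path used in the question (adjust both to your setup):

// svelte.config.js
import sveltePreprocess from 'svelte-preprocess';

export default {
  preprocess: sveltePreprocess({
    scss: {
      // Makes the themify mixin and $themes map available in every component
      prependData: `@import 'src/styles/theme.scss';`,
    },
  }),
};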
76384567
76384661
I learned 2 ways of inserting elements into a vector. And I've been wondering which way is faster since I'm working with time limits. Method 1: int n; cin>>n; vector<int> v(n); for(int i = 0;i<n;i++){ cin>>v[i]; } Method 2: int n; cin>>n; vector<int> v; for(int i = 0;i<n;i++){ int x; cin>>x; v.push_back(x); } If you have a better method to suggest, it'd be appreciated!
Is it faster to use push_back(x) or to assign through an index into a pre-sized vector?
Both have issues: You should be using reserve(n) int n; cin >> n; vector<int> v; v.reserve(n); for(int i = 0; i < n; ++i){ int x; cin >> x; v.emplace_back(x); } In the first version: Setting size. Here you have the issue that you are constructing all the elements in the array up front. For integers this may be insignificant, but if we extend this to non-integer types, a constructor needs to be called for each element, and then you use the assignment operator to copy over them. The second option: push_back. Here you run the risk of the underlying storage being reallocated (potentially multiple times). Each time you re-allocate, you need to copy the data from the old storage to the new storage. Again this hurts for integers, but really hurts for types with constructors and destructors. Prefer: emplace_back(). Rather than pushing a fully constructed object, you can use emplace_back and pass in the arguments used to construct the object. This allows the vector to construct the object in place. If you have simple integers or classes with efficient move semantics it is not an issue, but it is worth it as a general habit.

This dataset is scraped from Stack Overflow.

Loading the dataset

You can load the dataset like this:

from datasets import load_dataset
dataset = load_dataset("satoshi-2000/llms-suitable")